Nov 23 15:00:53 np0005532762 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 23 15:00:53 np0005532762 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 23 15:00:53 np0005532762 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:53 np0005532762 kernel: BIOS-provided physical RAM map:
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 23 15:00:53 np0005532762 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 23 15:00:53 np0005532762 kernel: NX (Execute Disable) protection: active
Nov 23 15:00:53 np0005532762 kernel: APIC: Static calls initialized
Nov 23 15:00:53 np0005532762 kernel: SMBIOS 2.8 present.
Nov 23 15:00:53 np0005532762 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 23 15:00:53 np0005532762 kernel: Hypervisor detected: KVM
Nov 23 15:00:53 np0005532762 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 23 15:00:53 np0005532762 kernel: kvm-clock: using sched offset of 8332881758 cycles
Nov 23 15:00:53 np0005532762 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 15:00:53 np0005532762 kernel: tsc: Detected 2799.998 MHz processor
Nov 23 15:00:53 np0005532762 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 23 15:00:53 np0005532762 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 23 15:00:53 np0005532762 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 23 15:00:53 np0005532762 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 23 15:00:53 np0005532762 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 23 15:00:53 np0005532762 kernel: Using GB pages for direct mapping
Nov 23 15:00:53 np0005532762 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 23 15:00:53 np0005532762 kernel: ACPI: Early table checksum verification disabled
Nov 23 15:00:53 np0005532762 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 23 15:00:53 np0005532762 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:53 np0005532762 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:53 np0005532762 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:53 np0005532762 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 23 15:00:53 np0005532762 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:53 np0005532762 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 23 15:00:53 np0005532762 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 23 15:00:53 np0005532762 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 23 15:00:53 np0005532762 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 23 15:00:53 np0005532762 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 23 15:00:53 np0005532762 kernel: No NUMA configuration found
Nov 23 15:00:53 np0005532762 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 23 15:00:53 np0005532762 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 23 15:00:53 np0005532762 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 23 15:00:53 np0005532762 kernel: Zone ranges:
Nov 23 15:00:53 np0005532762 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 23 15:00:53 np0005532762 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 23 15:00:53 np0005532762 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 15:00:53 np0005532762 kernel:  Device   empty
Nov 23 15:00:53 np0005532762 kernel: Movable zone start for each node
Nov 23 15:00:53 np0005532762 kernel: Early memory node ranges
Nov 23 15:00:53 np0005532762 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 23 15:00:53 np0005532762 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 23 15:00:53 np0005532762 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 23 15:00:53 np0005532762 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 23 15:00:53 np0005532762 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 23 15:00:53 np0005532762 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 23 15:00:53 np0005532762 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 23 15:00:53 np0005532762 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 23 15:00:53 np0005532762 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 23 15:00:53 np0005532762 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 23 15:00:53 np0005532762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 23 15:00:53 np0005532762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 23 15:00:53 np0005532762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 23 15:00:53 np0005532762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 23 15:00:53 np0005532762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 23 15:00:53 np0005532762 kernel: TSC deadline timer available
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Max. logical packages:   8
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Max. logical dies:       8
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Max. dies per package:   1
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Max. threads per core:   1
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Num. cores per package:     1
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Num. threads per package:   1
Nov 23 15:00:53 np0005532762 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 23 15:00:53 np0005532762 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 23 15:00:53 np0005532762 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 23 15:00:53 np0005532762 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 23 15:00:53 np0005532762 kernel: Booting paravirtualized kernel on KVM
Nov 23 15:00:53 np0005532762 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 23 15:00:53 np0005532762 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 23 15:00:53 np0005532762 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 23 15:00:53 np0005532762 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 23 15:00:53 np0005532762 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:53 np0005532762 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 23 15:00:53 np0005532762 kernel: random: crng init done
Nov 23 15:00:53 np0005532762 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: Fallback order for Node 0: 0 
Nov 23 15:00:53 np0005532762 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 23 15:00:53 np0005532762 kernel: Policy zone: Normal
Nov 23 15:00:53 np0005532762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 15:00:53 np0005532762 kernel: software IO TLB: area num 8.
Nov 23 15:00:53 np0005532762 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 23 15:00:53 np0005532762 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 23 15:00:53 np0005532762 kernel: ftrace: allocated 193 pages with 3 groups
Nov 23 15:00:53 np0005532762 kernel: Dynamic Preempt: voluntary
Nov 23 15:00:53 np0005532762 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 15:00:53 np0005532762 kernel: rcu: 	RCU event tracing is enabled.
Nov 23 15:00:53 np0005532762 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 23 15:00:53 np0005532762 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 23 15:00:53 np0005532762 kernel: 	Rude variant of Tasks RCU enabled.
Nov 23 15:00:53 np0005532762 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 23 15:00:53 np0005532762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 15:00:53 np0005532762 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 23 15:00:53 np0005532762 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:53 np0005532762 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:53 np0005532762 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 23 15:00:53 np0005532762 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 23 15:00:53 np0005532762 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 15:00:53 np0005532762 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 23 15:00:53 np0005532762 kernel: Console: colour VGA+ 80x25
Nov 23 15:00:53 np0005532762 kernel: printk: console [ttyS0] enabled
Nov 23 15:00:53 np0005532762 kernel: ACPI: Core revision 20230331
Nov 23 15:00:53 np0005532762 kernel: APIC: Switch to symmetric I/O mode setup
Nov 23 15:00:53 np0005532762 kernel: x2apic enabled
Nov 23 15:00:53 np0005532762 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 23 15:00:53 np0005532762 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 23 15:00:53 np0005532762 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 23 15:00:53 np0005532762 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 23 15:00:53 np0005532762 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 23 15:00:53 np0005532762 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 23 15:00:53 np0005532762 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 23 15:00:53 np0005532762 kernel: Spectre V2 : Mitigation: Retpolines
Nov 23 15:00:53 np0005532762 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 23 15:00:53 np0005532762 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 23 15:00:53 np0005532762 kernel: RETBleed: Mitigation: untrained return thunk
Nov 23 15:00:53 np0005532762 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 23 15:00:53 np0005532762 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 23 15:00:53 np0005532762 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 23 15:00:53 np0005532762 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 23 15:00:53 np0005532762 kernel: x86/bugs: return thunk changed
Nov 23 15:00:53 np0005532762 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 23 15:00:53 np0005532762 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 23 15:00:53 np0005532762 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 23 15:00:53 np0005532762 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 23 15:00:53 np0005532762 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 23 15:00:53 np0005532762 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 23 15:00:53 np0005532762 kernel: Freeing SMP alternatives memory: 40K
Nov 23 15:00:53 np0005532762 kernel: pid_max: default: 32768 minimum: 301
Nov 23 15:00:53 np0005532762 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 23 15:00:53 np0005532762 kernel: landlock: Up and running.
Nov 23 15:00:53 np0005532762 kernel: Yama: becoming mindful.
Nov 23 15:00:53 np0005532762 kernel: SELinux:  Initializing.
Nov 23 15:00:53 np0005532762 kernel: LSM support for eBPF active
Nov 23 15:00:53 np0005532762 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 23 15:00:53 np0005532762 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 23 15:00:53 np0005532762 kernel: ... version:                0
Nov 23 15:00:53 np0005532762 kernel: ... bit width:              48
Nov 23 15:00:53 np0005532762 kernel: ... generic registers:      6
Nov 23 15:00:53 np0005532762 kernel: ... value mask:             0000ffffffffffff
Nov 23 15:00:53 np0005532762 kernel: ... max period:             00007fffffffffff
Nov 23 15:00:53 np0005532762 kernel: ... fixed-purpose events:   0
Nov 23 15:00:53 np0005532762 kernel: ... event mask:             000000000000003f
Nov 23 15:00:53 np0005532762 kernel: signal: max sigframe size: 1776
Nov 23 15:00:53 np0005532762 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 15:00:53 np0005532762 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 23 15:00:53 np0005532762 kernel: smp: Bringing up secondary CPUs ...
Nov 23 15:00:53 np0005532762 kernel: smpboot: x86: Booting SMP configuration:
Nov 23 15:00:53 np0005532762 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 23 15:00:53 np0005532762 kernel: smp: Brought up 1 node, 8 CPUs
Nov 23 15:00:53 np0005532762 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 23 15:00:53 np0005532762 kernel: node 0 deferred pages initialised in 11ms
Nov 23 15:00:53 np0005532762 kernel: Memory: 7765864K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 23 15:00:53 np0005532762 kernel: devtmpfs: initialized
Nov 23 15:00:53 np0005532762 kernel: x86/mm: Memory block size: 128MB
Nov 23 15:00:53 np0005532762 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 15:00:53 np0005532762 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 15:00:53 np0005532762 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 23 15:00:53 np0005532762 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 15:00:53 np0005532762 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 15:00:53 np0005532762 kernel: audit: initializing netlink subsys (disabled)
Nov 23 15:00:53 np0005532762 kernel: audit: type=2000 audit(1763928051.511:1): state=initialized audit_enabled=0 res=1
Nov 23 15:00:53 np0005532762 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 23 15:00:53 np0005532762 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 15:00:53 np0005532762 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 23 15:00:53 np0005532762 kernel: cpuidle: using governor menu
Nov 23 15:00:53 np0005532762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 15:00:53 np0005532762 kernel: PCI: Using configuration type 1 for base access
Nov 23 15:00:53 np0005532762 kernel: PCI: Using configuration type 1 for extended access
Nov 23 15:00:53 np0005532762 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 23 15:00:53 np0005532762 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 15:00:53 np0005532762 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 15:00:53 np0005532762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 15:00:53 np0005532762 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 15:00:53 np0005532762 kernel: Demotion targets for Node 0: null
Nov 23 15:00:53 np0005532762 kernel: cryptd: max_cpu_qlen set to 1000
Nov 23 15:00:53 np0005532762 kernel: ACPI: Added _OSI(Module Device)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 15:00:53 np0005532762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 15:00:53 np0005532762 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 23 15:00:53 np0005532762 kernel: ACPI: Interpreter enabled
Nov 23 15:00:53 np0005532762 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 23 15:00:53 np0005532762 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 23 15:00:53 np0005532762 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 23 15:00:53 np0005532762 kernel: PCI: Using E820 reservations for host bridge windows
Nov 23 15:00:53 np0005532762 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 15:00:53 np0005532762 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [3] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [4] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [5] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [6] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [7] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [8] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [9] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [10] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [11] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [12] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [13] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [14] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [15] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [16] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [17] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [18] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [19] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [20] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [21] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [22] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [23] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [24] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [25] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [26] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [27] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [28] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [29] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [30] registered
Nov 23 15:00:53 np0005532762 kernel: acpiphp: Slot [31] registered
Nov 23 15:00:53 np0005532762 kernel: PCI host bridge to bus 0000:00
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 23 15:00:53 np0005532762 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 23 15:00:53 np0005532762 kernel: iommu: Default domain type: Translated
Nov 23 15:00:53 np0005532762 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 23 15:00:53 np0005532762 kernel: SCSI subsystem initialized
Nov 23 15:00:53 np0005532762 kernel: ACPI: bus type USB registered
Nov 23 15:00:53 np0005532762 kernel: usbcore: registered new interface driver usbfs
Nov 23 15:00:53 np0005532762 kernel: usbcore: registered new interface driver hub
Nov 23 15:00:53 np0005532762 kernel: usbcore: registered new device driver usb
Nov 23 15:00:53 np0005532762 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 15:00:53 np0005532762 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 23 15:00:53 np0005532762 kernel: PTP clock support registered
Nov 23 15:00:53 np0005532762 kernel: EDAC MC: Ver: 3.0.0
Nov 23 15:00:53 np0005532762 kernel: NetLabel: Initializing
Nov 23 15:00:53 np0005532762 kernel: NetLabel:  domain hash size = 128
Nov 23 15:00:53 np0005532762 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 23 15:00:53 np0005532762 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 23 15:00:53 np0005532762 kernel: PCI: Using ACPI for IRQ routing
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 23 15:00:53 np0005532762 kernel: vgaarb: loaded
Nov 23 15:00:53 np0005532762 kernel: clocksource: Switched to clocksource kvm-clock
Nov 23 15:00:53 np0005532762 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 15:00:53 np0005532762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 15:00:53 np0005532762 kernel: pnp: PnP ACPI init
Nov 23 15:00:53 np0005532762 kernel: pnp: PnP ACPI: found 5 devices
Nov 23 15:00:53 np0005532762 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_INET protocol family
Nov 23 15:00:53 np0005532762 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 23 15:00:53 np0005532762 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_XDP protocol family
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 23 15:00:53 np0005532762 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 23 15:00:53 np0005532762 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 23 15:00:53 np0005532762 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72036 usecs
Nov 23 15:00:53 np0005532762 kernel: PCI: CLS 0 bytes, default 64
Nov 23 15:00:53 np0005532762 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 23 15:00:53 np0005532762 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 23 15:00:53 np0005532762 kernel: ACPI: bus type thunderbolt registered
Nov 23 15:00:53 np0005532762 kernel: Trying to unpack rootfs image as initramfs...
Nov 23 15:00:53 np0005532762 kernel: Initialise system trusted keyrings
Nov 23 15:00:53 np0005532762 kernel: Key type blacklist registered
Nov 23 15:00:53 np0005532762 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 23 15:00:53 np0005532762 kernel: zbud: loaded
Nov 23 15:00:53 np0005532762 kernel: integrity: Platform Keyring initialized
Nov 23 15:00:53 np0005532762 kernel: integrity: Machine keyring initialized
Nov 23 15:00:53 np0005532762 kernel: Freeing initrd memory: 85868K
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_ALG protocol family
Nov 23 15:00:53 np0005532762 kernel: xor: automatically using best checksumming function   avx       
Nov 23 15:00:53 np0005532762 kernel: Key type asymmetric registered
Nov 23 15:00:53 np0005532762 kernel: Asymmetric key parser 'x509' registered
Nov 23 15:00:53 np0005532762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 23 15:00:53 np0005532762 kernel: io scheduler mq-deadline registered
Nov 23 15:00:53 np0005532762 kernel: io scheduler kyber registered
Nov 23 15:00:53 np0005532762 kernel: io scheduler bfq registered
Nov 23 15:00:53 np0005532762 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 23 15:00:53 np0005532762 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 23 15:00:53 np0005532762 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 23 15:00:53 np0005532762 kernel: ACPI: button: Power Button [PWRF]
Nov 23 15:00:53 np0005532762 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 23 15:00:53 np0005532762 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 23 15:00:53 np0005532762 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 23 15:00:53 np0005532762 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 15:00:53 np0005532762 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 23 15:00:53 np0005532762 kernel: Non-volatile memory driver v1.3
Nov 23 15:00:53 np0005532762 kernel: rdac: device handler registered
Nov 23 15:00:53 np0005532762 kernel: hp_sw: device handler registered
Nov 23 15:00:53 np0005532762 kernel: emc: device handler registered
Nov 23 15:00:53 np0005532762 kernel: alua: device handler registered
Nov 23 15:00:53 np0005532762 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 23 15:00:53 np0005532762 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 23 15:00:53 np0005532762 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 23 15:00:53 np0005532762 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 23 15:00:53 np0005532762 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 23 15:00:53 np0005532762 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 23 15:00:53 np0005532762 kernel: usb usb1: Product: UHCI Host Controller
Nov 23 15:00:53 np0005532762 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 23 15:00:53 np0005532762 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 23 15:00:53 np0005532762 kernel: hub 1-0:1.0: USB hub found
Nov 23 15:00:53 np0005532762 kernel: hub 1-0:1.0: 2 ports detected
Nov 23 15:00:53 np0005532762 kernel: usbcore: registered new interface driver usbserial_generic
Nov 23 15:00:53 np0005532762 kernel: usbserial: USB Serial support registered for generic
Nov 23 15:00:53 np0005532762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 23 15:00:53 np0005532762 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 23 15:00:53 np0005532762 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 23 15:00:53 np0005532762 kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 15:00:53 np0005532762 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 23 15:00:53 np0005532762 kernel: rtc_cmos 00:04: registered as rtc0
Nov 23 15:00:53 np0005532762 kernel: rtc_cmos 00:04: setting system clock to 2025-11-23T20:00:52 UTC (1763928052)
Nov 23 15:00:53 np0005532762 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 23 15:00:53 np0005532762 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 23 15:00:53 np0005532762 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 15:00:53 np0005532762 kernel: usbcore: registered new interface driver usbhid
Nov 23 15:00:53 np0005532762 kernel: usbhid: USB HID core driver
Nov 23 15:00:53 np0005532762 kernel: drop_monitor: Initializing network drop monitor service
Nov 23 15:00:53 np0005532762 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 23 15:00:53 np0005532762 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 23 15:00:53 np0005532762 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 23 15:00:53 np0005532762 kernel: Initializing XFRM netlink socket
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_INET6 protocol family
Nov 23 15:00:53 np0005532762 kernel: Segment Routing with IPv6
Nov 23 15:00:53 np0005532762 kernel: NET: Registered PF_PACKET protocol family
Nov 23 15:00:53 np0005532762 kernel: mpls_gso: MPLS GSO support
Nov 23 15:00:53 np0005532762 kernel: IPI shorthand broadcast: enabled
Nov 23 15:00:53 np0005532762 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 23 15:00:53 np0005532762 kernel: AES CTR mode by8 optimization enabled
Nov 23 15:00:53 np0005532762 kernel: sched_clock: Marking stable (1166015658, 158927723)->(1401774591, -76831210)
Nov 23 15:00:53 np0005532762 kernel: registered taskstats version 1
Nov 23 15:00:53 np0005532762 kernel: Loading compiled-in X.509 certificates
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 23 15:00:53 np0005532762 kernel: Demotion targets for Node 0: null
Nov 23 15:00:53 np0005532762 kernel: page_owner is disabled
Nov 23 15:00:53 np0005532762 kernel: Key type .fscrypt registered
Nov 23 15:00:53 np0005532762 kernel: Key type fscrypt-provisioning registered
Nov 23 15:00:53 np0005532762 kernel: Key type big_key registered
Nov 23 15:00:53 np0005532762 kernel: Key type encrypted registered
Nov 23 15:00:53 np0005532762 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 15:00:53 np0005532762 kernel: Loading compiled-in module X.509 certificates
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 23 15:00:53 np0005532762 kernel: ima: Allocated hash algorithm: sha256
Nov 23 15:00:53 np0005532762 kernel: ima: No architecture policies found
Nov 23 15:00:53 np0005532762 kernel: evm: Initialising EVM extended attributes:
Nov 23 15:00:53 np0005532762 kernel: evm: security.selinux
Nov 23 15:00:53 np0005532762 kernel: evm: security.SMACK64 (disabled)
Nov 23 15:00:53 np0005532762 kernel: evm: security.SMACK64EXEC (disabled)
Nov 23 15:00:53 np0005532762 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 23 15:00:53 np0005532762 kernel: evm: security.SMACK64MMAP (disabled)
Nov 23 15:00:53 np0005532762 kernel: evm: security.apparmor (disabled)
Nov 23 15:00:53 np0005532762 kernel: evm: security.ima
Nov 23 15:00:53 np0005532762 kernel: evm: security.capability
Nov 23 15:00:53 np0005532762 kernel: evm: HMAC attrs: 0x1
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 23 15:00:53 np0005532762 kernel: Running certificate verification RSA selftest
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 23 15:00:53 np0005532762 kernel: Running certificate verification ECDSA selftest
Nov 23 15:00:53 np0005532762 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 23 15:00:53 np0005532762 kernel: clk: Disabling unused clocks
Nov 23 15:00:53 np0005532762 kernel: Freeing unused decrypted memory: 2028K
Nov 23 15:00:53 np0005532762 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 23 15:00:53 np0005532762 kernel: Write protecting the kernel read-only data: 30720k
Nov 23 15:00:53 np0005532762 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 23 15:00:53 np0005532762 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 23 15:00:53 np0005532762 kernel: Run /init as init process
Nov 23 15:00:53 np0005532762 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 15:00:53 np0005532762 systemd: Detected virtualization kvm.
Nov 23 15:00:53 np0005532762 systemd: Detected architecture x86-64.
Nov 23 15:00:53 np0005532762 systemd: Running in initrd.
Nov 23 15:00:53 np0005532762 systemd: No hostname configured, using default hostname.
Nov 23 15:00:53 np0005532762 systemd: Hostname set to <localhost>.
Nov 23 15:00:53 np0005532762 systemd: Initializing machine ID from VM UUID.
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: Manufacturer: QEMU
Nov 23 15:00:53 np0005532762 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 23 15:00:53 np0005532762 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 23 15:00:53 np0005532762 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 23 15:00:53 np0005532762 systemd: Queued start job for default target Initrd Default Target.
Nov 23 15:00:53 np0005532762 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 15:00:53 np0005532762 systemd: Reached target Local Encrypted Volumes.
Nov 23 15:00:53 np0005532762 systemd: Reached target Initrd /usr File System.
Nov 23 15:00:53 np0005532762 systemd: Reached target Local File Systems.
Nov 23 15:00:53 np0005532762 systemd: Reached target Path Units.
Nov 23 15:00:53 np0005532762 systemd: Reached target Slice Units.
Nov 23 15:00:53 np0005532762 systemd: Reached target Swaps.
Nov 23 15:00:53 np0005532762 systemd: Reached target Timer Units.
Nov 23 15:00:53 np0005532762 systemd: Listening on D-Bus System Message Bus Socket.
Nov 23 15:00:53 np0005532762 systemd: Listening on Journal Socket (/dev/log).
Nov 23 15:00:53 np0005532762 systemd: Listening on Journal Socket.
Nov 23 15:00:53 np0005532762 systemd: Listening on udev Control Socket.
Nov 23 15:00:53 np0005532762 systemd: Listening on udev Kernel Socket.
Nov 23 15:00:53 np0005532762 systemd: Reached target Socket Units.
Nov 23 15:00:53 np0005532762 systemd: Starting Create List of Static Device Nodes...
Nov 23 15:00:53 np0005532762 systemd: Starting Journal Service...
Nov 23 15:00:53 np0005532762 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 15:00:53 np0005532762 systemd: Starting Apply Kernel Variables...
Nov 23 15:00:53 np0005532762 systemd: Starting Create System Users...
Nov 23 15:00:53 np0005532762 systemd: Starting Setup Virtual Console...
Nov 23 15:00:53 np0005532762 systemd: Finished Create List of Static Device Nodes.
Nov 23 15:00:53 np0005532762 systemd: Finished Apply Kernel Variables.
Nov 23 15:00:53 np0005532762 systemd: Finished Create System Users.
Nov 23 15:00:53 np0005532762 systemd-journald[303]: Journal started
Nov 23 15:00:53 np0005532762 systemd-journald[303]: Runtime Journal (/run/log/journal/dffd854b01ce4a28b7a632174dbe320c) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:00:53 np0005532762 systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 23 15:00:53 np0005532762 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 23 15:00:53 np0005532762 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 23 15:00:53 np0005532762 systemd: Starting Create Static Device Nodes in /dev...
Nov 23 15:00:53 np0005532762 systemd: Started Journal Service.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 15:00:53 np0005532762 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 15:00:53 np0005532762 systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 15:00:53 np0005532762 systemd[1]: Finished Setup Virtual Console.
Nov 23 15:00:53 np0005532762 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting dracut cmdline hook...
Nov 23 15:00:53 np0005532762 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Nov 23 15:00:53 np0005532762 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 23 15:00:53 np0005532762 systemd[1]: Finished dracut cmdline hook.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting dracut pre-udev hook...
Nov 23 15:00:53 np0005532762 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 15:00:53 np0005532762 kernel: device-mapper: uevent: version 1.0.3
Nov 23 15:00:53 np0005532762 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 23 15:00:53 np0005532762 kernel: RPC: Registered named UNIX socket transport module.
Nov 23 15:00:53 np0005532762 kernel: RPC: Registered udp transport module.
Nov 23 15:00:53 np0005532762 kernel: RPC: Registered tcp transport module.
Nov 23 15:00:53 np0005532762 kernel: RPC: Registered tcp-with-tls transport module.
Nov 23 15:00:53 np0005532762 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 23 15:00:53 np0005532762 rpc.statd[440]: Version 2.5.4 starting
Nov 23 15:00:53 np0005532762 rpc.statd[440]: Initializing NSM state
Nov 23 15:00:53 np0005532762 rpc.idmapd[445]: Setting log level to 0
Nov 23 15:00:53 np0005532762 systemd[1]: Finished dracut pre-udev hook.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 15:00:53 np0005532762 systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 15:00:53 np0005532762 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting dracut pre-trigger hook...
Nov 23 15:00:53 np0005532762 systemd[1]: Finished dracut pre-trigger hook.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting Coldplug All udev Devices...
Nov 23 15:00:53 np0005532762 systemd[1]: Created slice Slice /system/modprobe.
Nov 23 15:00:53 np0005532762 systemd[1]: Starting Load Kernel Module configfs...
Nov 23 15:00:53 np0005532762 systemd[1]: Finished Coldplug All udev Devices.
Nov 23 15:00:53 np0005532762 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:00:53 np0005532762 systemd[1]: Finished Load Kernel Module configfs.
Nov 23 15:00:53 np0005532762 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 15:00:53 np0005532762 systemd[1]: Reached target Network.
Nov 23 15:00:53 np0005532762 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 15:00:53 np0005532762 systemd[1]: Starting dracut initqueue hook...
Nov 23 15:00:53 np0005532762 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 23 15:00:53 np0005532762 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 23 15:00:53 np0005532762 kernel: vda: vda1
Nov 23 15:00:53 np0005532762 kernel: scsi host0: ata_piix
Nov 23 15:00:53 np0005532762 kernel: scsi host1: ata_piix
Nov 23 15:00:53 np0005532762 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 23 15:00:53 np0005532762 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 23 15:00:53 np0005532762 systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:00:54 np0005532762 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target Initrd Root Device.
Nov 23 15:00:54 np0005532762 systemd[1]: Mounting Kernel Configuration File System...
Nov 23 15:00:54 np0005532762 systemd[1]: Mounted Kernel Configuration File System.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target System Initialization.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target Basic System.
Nov 23 15:00:54 np0005532762 kernel: ata1: found unknown device (class 0)
Nov 23 15:00:54 np0005532762 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 23 15:00:54 np0005532762 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 23 15:00:54 np0005532762 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 23 15:00:54 np0005532762 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 23 15:00:54 np0005532762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 15:00:54 np0005532762 systemd[1]: Finished dracut initqueue hook.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 23 15:00:54 np0005532762 systemd[1]: Reached target Remote File Systems.
Nov 23 15:00:54 np0005532762 systemd[1]: Starting dracut pre-mount hook...
Nov 23 15:00:54 np0005532762 systemd[1]: Finished dracut pre-mount hook.
Nov 23 15:00:54 np0005532762 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 23 15:00:54 np0005532762 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 23 15:00:54 np0005532762 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 23 15:00:54 np0005532762 systemd[1]: Mounting /sysroot...
Nov 23 15:00:55 np0005532762 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 23 15:00:55 np0005532762 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 23 15:02:24 np0005532762 systemd[1]: sysroot.mount: Mounting timed out. Terminating.
Nov 23 15:02:35 np0005532762 kernel: XFS (vda1): Ending clean mount
Nov 23 15:02:48 np0005532762 systemd[1]: sysroot.mount: Mount process exited, code=killed, status=15/TERM
Nov 23 15:02:48 np0005532762 systemd[1]: Mounted /sysroot.
Nov 23 15:02:48 np0005532762 systemd[1]: Reached target Initrd Root File System.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 23 15:02:48 np0005532762 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 23 15:02:48 np0005532762 systemd[1]: Reached target Initrd File Systems.
Nov 23 15:02:48 np0005532762 systemd[1]: Reached target Initrd Default Target.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting dracut mount hook...
Nov 23 15:02:48 np0005532762 systemd[1]: Finished dracut mount hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 23 15:02:48 np0005532762 rpc.idmapd[445]: exiting on signal 15
Nov 23 15:02:48 np0005532762 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Network.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Timer Units.
Nov 23 15:02:48 np0005532762 systemd[1]: dbus.socket: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Initrd Default Target.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Basic System.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Initrd Root Device.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Initrd /usr File System.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Path Units.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Remote File Systems.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Slice Units.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Socket Units.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target System Initialization.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Local File Systems.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Swaps.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut mount hook.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut pre-mount hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut initqueue hook.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Apply Kernel Variables.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Coldplug All udev Devices.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut pre-trigger hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Setup Virtual Console.
Nov 23 15:02:48 np0005532762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Closed udev Control Socket.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Closed udev Kernel Socket.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut pre-udev hook.
Nov 23 15:02:48 np0005532762 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped dracut cmdline hook.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting Cleanup udev Database...
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 23 15:02:48 np0005532762 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 23 15:02:48 np0005532762 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Stopped Create System Users.
Nov 23 15:02:48 np0005532762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 23 15:02:48 np0005532762 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 15:02:48 np0005532762 systemd[1]: Finished Cleanup udev Database.
Nov 23 15:02:48 np0005532762 systemd[1]: Reached target Switch Root.
Nov 23 15:02:48 np0005532762 systemd[1]: Starting Switch Root...
Nov 23 15:02:48 np0005532762 systemd[1]: Switching root.
Nov 23 15:02:48 np0005532762 systemd-journald[303]: Journal stopped
Nov 23 15:02:49 np0005532762 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 23 15:02:49 np0005532762 kernel: audit: type=1404 audit(1763928168.781:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:02:49 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:02:49 np0005532762 kernel: audit: type=1403 audit(1763928168.951:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 15:02:49 np0005532762 systemd: Successfully loaded SELinux policy in 175.501ms.
Nov 23 15:02:49 np0005532762 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.254ms.
Nov 23 15:02:49 np0005532762 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 15:02:49 np0005532762 systemd: Detected virtualization kvm.
Nov 23 15:02:49 np0005532762 systemd: Detected architecture x86-64.
Nov 23 15:02:49 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:02:49 np0005532762 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd: Stopped Switch Root.
Nov 23 15:02:49 np0005532762 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 15:02:49 np0005532762 systemd: Created slice Slice /system/getty.
Nov 23 15:02:49 np0005532762 systemd: Created slice Slice /system/serial-getty.
Nov 23 15:02:49 np0005532762 systemd: Created slice Slice /system/sshd-keygen.
Nov 23 15:02:49 np0005532762 systemd: Created slice User and Session Slice.
Nov 23 15:02:49 np0005532762 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 15:02:49 np0005532762 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 23 15:02:49 np0005532762 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 23 15:02:49 np0005532762 systemd: Reached target Local Encrypted Volumes.
Nov 23 15:02:49 np0005532762 systemd: Stopped target Switch Root.
Nov 23 15:02:49 np0005532762 systemd: Stopped target Initrd File Systems.
Nov 23 15:02:49 np0005532762 systemd: Stopped target Initrd Root File System.
Nov 23 15:02:49 np0005532762 systemd: Reached target Local Integrity Protected Volumes.
Nov 23 15:02:49 np0005532762 systemd: Reached target Path Units.
Nov 23 15:02:49 np0005532762 systemd: Reached target rpc_pipefs.target.
Nov 23 15:02:49 np0005532762 systemd: Reached target Slice Units.
Nov 23 15:02:49 np0005532762 systemd: Reached target Swaps.
Nov 23 15:02:49 np0005532762 systemd: Reached target Local Verity Protected Volumes.
Nov 23 15:02:49 np0005532762 systemd: Listening on RPCbind Server Activation Socket.
Nov 23 15:02:49 np0005532762 systemd: Reached target RPC Port Mapper.
Nov 23 15:02:49 np0005532762 systemd: Listening on Process Core Dump Socket.
Nov 23 15:02:49 np0005532762 systemd: Listening on initctl Compatibility Named Pipe.
Nov 23 15:02:49 np0005532762 systemd: Listening on udev Control Socket.
Nov 23 15:02:49 np0005532762 systemd: Listening on udev Kernel Socket.
Nov 23 15:02:49 np0005532762 systemd: Mounting Huge Pages File System...
Nov 23 15:02:49 np0005532762 systemd: Mounting POSIX Message Queue File System...
Nov 23 15:02:49 np0005532762 systemd: Mounting Kernel Debug File System...
Nov 23 15:02:49 np0005532762 systemd: Mounting Kernel Trace File System...
Nov 23 15:02:49 np0005532762 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 15:02:49 np0005532762 systemd: Starting Create List of Static Device Nodes...
Nov 23 15:02:49 np0005532762 systemd: Starting Load Kernel Module configfs...
Nov 23 15:02:49 np0005532762 systemd: Starting Load Kernel Module drm...
Nov 23 15:02:49 np0005532762 systemd: Starting Load Kernel Module efi_pstore...
Nov 23 15:02:49 np0005532762 systemd: Starting Load Kernel Module fuse...
Nov 23 15:02:49 np0005532762 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 23 15:02:49 np0005532762 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd: Stopped File System Check on Root Device.
Nov 23 15:02:49 np0005532762 systemd: Stopped Journal Service.
Nov 23 15:02:49 np0005532762 kernel: fuse: init (API version 7.37)
Nov 23 15:02:49 np0005532762 systemd: Starting Journal Service...
Nov 23 15:02:49 np0005532762 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532762 systemd: Starting Generate network units from Kernel command line...
Nov 23 15:02:49 np0005532762 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:49 np0005532762 systemd: Starting Remount Root and Kernel File Systems...
Nov 23 15:02:49 np0005532762 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532762 systemd: Starting Apply Kernel Variables...
Nov 23 15:02:49 np0005532762 systemd: Starting Coldplug All udev Devices...
Nov 23 15:02:49 np0005532762 systemd: Mounted Huge Pages File System.
Nov 23 15:02:49 np0005532762 systemd: Mounted POSIX Message Queue File System.
Nov 23 15:02:49 np0005532762 systemd: Mounted Kernel Debug File System.
Nov 23 15:02:49 np0005532762 systemd-journald[676]: Journal started
Nov 23 15:02:49 np0005532762 systemd-journald[676]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:02:49 np0005532762 systemd[1]: Queued start job for default target Multi-User System.
Nov 23 15:02:49 np0005532762 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd: Started Journal Service.
Nov 23 15:02:49 np0005532762 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 23 15:02:49 np0005532762 systemd[1]: Mounted Kernel Trace File System.
Nov 23 15:02:49 np0005532762 kernel: ACPI: bus type drm_connector registered
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 15:02:49 np0005532762 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Load Kernel Module configfs.
Nov 23 15:02:49 np0005532762 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Load Kernel Module drm.
Nov 23 15:02:49 np0005532762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 23 15:02:49 np0005532762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Load Kernel Module fuse.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Generate network units from Kernel command line.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Apply Kernel Variables.
Nov 23 15:02:49 np0005532762 systemd[1]: Mounting FUSE Control File System...
Nov 23 15:02:49 np0005532762 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Rebuild Hardware Database...
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 23 15:02:49 np0005532762 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Load/Save OS Random Seed...
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Create System Users...
Nov 23 15:02:49 np0005532762 systemd[1]: Mounted FUSE Control File System.
Nov 23 15:02:49 np0005532762 systemd-journald[676]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 23 15:02:49 np0005532762 systemd-journald[676]: Received client request to flush runtime journal.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Load/Save OS Random Seed.
Nov 23 15:02:49 np0005532762 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Create System Users.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Coldplug All udev Devices.
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 15:02:49 np0005532762 systemd[1]: Reached target Preparation for Local File Systems.
Nov 23 15:02:49 np0005532762 systemd[1]: Reached target Local File Systems.
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 23 15:02:49 np0005532762 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 23 15:02:49 np0005532762 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532762 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Automatic Boot Loader Update...
Nov 23 15:02:49 np0005532762 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 23 15:02:49 np0005532762 systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 15:02:49 np0005532762 bootctl[695]: Couldn't find EFI system partition, skipping.
Nov 23 15:02:49 np0005532762 systemd[1]: Finished Automatic Boot Loader Update.
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Security Auditing Service...
Nov 23 15:02:50 np0005532762 systemd[1]: Starting RPC Bind...
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Rebuild Journal Catalog...
Nov 23 15:02:50 np0005532762 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 23 15:02:50 np0005532762 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 23 15:02:50 np0005532762 systemd[1]: Started RPC Bind.
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Rebuild Journal Catalog.
Nov 23 15:02:50 np0005532762 augenrules[706]: /sbin/augenrules: No change
Nov 23 15:02:50 np0005532762 augenrules[721]: No rules
Nov 23 15:02:50 np0005532762 augenrules[721]: enabled 1
Nov 23 15:02:50 np0005532762 augenrules[721]: failure 1
Nov 23 15:02:50 np0005532762 augenrules[721]: pid 701
Nov 23 15:02:50 np0005532762 augenrules[721]: rate_limit 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_limit 8192
Nov 23 15:02:50 np0005532762 augenrules[721]: lost 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog 3
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532762 augenrules[721]: enabled 1
Nov 23 15:02:50 np0005532762 augenrules[721]: failure 1
Nov 23 15:02:50 np0005532762 augenrules[721]: pid 701
Nov 23 15:02:50 np0005532762 augenrules[721]: rate_limit 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_limit 8192
Nov 23 15:02:50 np0005532762 augenrules[721]: lost 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog 3
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532762 augenrules[721]: enabled 1
Nov 23 15:02:50 np0005532762 augenrules[721]: failure 1
Nov 23 15:02:50 np0005532762 augenrules[721]: pid 701
Nov 23 15:02:50 np0005532762 augenrules[721]: rate_limit 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_limit 8192
Nov 23 15:02:50 np0005532762 augenrules[721]: lost 0
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog 1
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time 60000
Nov 23 15:02:50 np0005532762 augenrules[721]: backlog_wait_time_actual 0
Nov 23 15:02:50 np0005532762 systemd[1]: Started Security Auditing Service.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Rebuild Hardware Database.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 15:02:50 np0005532762 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Update is Completed...
Nov 23 15:02:50 np0005532762 systemd[1]: Finished Update is Completed.
Nov 23 15:02:50 np0005532762 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 15:02:50 np0005532762 systemd[1]: Reached target System Initialization.
Nov 23 15:02:50 np0005532762 systemd[1]: Started dnf makecache --timer.
Nov 23 15:02:50 np0005532762 systemd[1]: Started Daily rotation of log files.
Nov 23 15:02:50 np0005532762 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 23 15:02:50 np0005532762 systemd[1]: Reached target Timer Units.
Nov 23 15:02:50 np0005532762 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 15:02:50 np0005532762 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 23 15:02:50 np0005532762 systemd[1]: Reached target Socket Units.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting D-Bus System Message Bus...
Nov 23 15:02:50 np0005532762 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:50 np0005532762 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 23 15:02:50 np0005532762 systemd[1]: Starting Load Kernel Module configfs...
Nov 23 15:02:51 np0005532762 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 15:02:51 np0005532762 systemd[1]: Finished Load Kernel Module configfs.
Nov 23 15:02:51 np0005532762 systemd-udevd[753]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:02:51 np0005532762 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 23 15:02:51 np0005532762 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 23 15:02:51 np0005532762 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 23 15:02:51 np0005532762 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 23 15:02:51 np0005532762 systemd[1]: Started D-Bus System Message Bus.
Nov 23 15:02:51 np0005532762 dbus-broker-lau[755]: Ready
Nov 23 15:02:51 np0005532762 systemd[1]: Reached target Basic System.
Nov 23 15:02:51 np0005532762 systemd[1]: Starting NTP client/server...
Nov 23 15:02:51 np0005532762 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 23 15:02:51 np0005532762 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 23 15:02:51 np0005532762 systemd[1]: Starting IPv4 firewall with iptables...
Nov 23 15:02:51 np0005532762 systemd[1]: Started irqbalance daemon.
Nov 23 15:02:51 np0005532762 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 23 15:02:51 np0005532762 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532762 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532762 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:02:51 np0005532762 systemd[1]: Reached target sshd-keygen.target.
Nov 23 15:02:51 np0005532762 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 23 15:02:51 np0005532762 systemd[1]: Reached target User and Group Name Lookups.
Nov 23 15:02:51 np0005532762 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 23 15:02:51 np0005532762 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 23 15:02:51 np0005532762 kernel: Console: switching to colour dummy device 80x25
Nov 23 15:02:51 np0005532762 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 23 15:02:51 np0005532762 kernel: [drm] features: -context_init
Nov 23 15:02:51 np0005532762 kernel: [drm] number of scanouts: 1
Nov 23 15:02:51 np0005532762 kernel: [drm] number of cap sets: 0
Nov 23 15:02:51 np0005532762 systemd[1]: Starting User Login Management...
Nov 23 15:02:51 np0005532762 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 23 15:02:51 np0005532762 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 23 15:02:51 np0005532762 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 23 15:02:51 np0005532762 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 15:02:51 np0005532762 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 23 15:02:51 np0005532762 chronyd[808]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 15:02:51 np0005532762 chronyd[808]: Loaded 0 symmetric keys
Nov 23 15:02:51 np0005532762 chronyd[808]: Using right/UTC timezone to obtain leap second data
Nov 23 15:02:51 np0005532762 chronyd[808]: Loaded seccomp filter (level 2)
Nov 23 15:02:51 np0005532762 systemd[1]: Started NTP client/server.
Nov 23 15:02:51 np0005532762 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 23 15:02:51 np0005532762 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 23 15:02:51 np0005532762 systemd-logind[793]: New seat seat0.
Nov 23 15:02:51 np0005532762 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 15:02:51 np0005532762 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 15:02:51 np0005532762 systemd[1]: Started User Login Management.
Nov 23 15:02:51 np0005532762 kernel: kvm_amd: TSC scaling supported
Nov 23 15:02:51 np0005532762 kernel: kvm_amd: Nested Virtualization enabled
Nov 23 15:02:51 np0005532762 kernel: kvm_amd: Nested Paging enabled
Nov 23 15:02:51 np0005532762 kernel: kvm_amd: LBR virtualization supported
Nov 23 15:02:51 np0005532762 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Nov 23 15:02:51 np0005532762 systemd[1]: Finished IPv4 firewall with iptables.
Nov 23 15:02:52 np0005532762 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 23 Nov 2025 20:02:52 +0000. Up 120.68 seconds.
Nov 23 15:02:52 np0005532762 systemd[1]: run-cloud\x2dinit-tmp-tmpkxj0f9qi.mount: Deactivated successfully.
Nov 23 15:02:52 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 15:02:52 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 15:02:52 np0005532762 systemd-hostnamed[852]: Hostname set to <np0005532762.novalocal> (static)
Nov 23 15:02:52 np0005532762 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 23 15:02:52 np0005532762 systemd[1]: Reached target Preparation for Network.
Nov 23 15:02:52 np0005532762 systemd[1]: Starting Network Manager...
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.6954] NetworkManager (version 1.54.1-1.el9) is starting... (boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.6959] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7100] manager[0x55956a72c080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7154] hostname: hostname: using hostnamed
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7155] hostname: static hostname changed from (none) to "np0005532762.novalocal"
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7159] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7307] manager[0x55956a72c080]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7310] manager[0x55956a72c080]: rfkill: WWAN hardware radio set enabled
Nov 23 15:02:52 np0005532762 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7412] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7413] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7413] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7414] manager: Networking is enabled by state file
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7416] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7468] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7497] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7525] dhcp: init: Using DHCP client 'internal'
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7529] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7544] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7559] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7568] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7578] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7581] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7614] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:02:52 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7619] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7621] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7622] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7624] device (eth0): carrier: link connected
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7625] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7630] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7639] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:02:52 np0005532762 systemd[1]: Started Network Manager.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7645] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7646] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7649] manager: NetworkManager state is now CONNECTING
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7650] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7659] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7661] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:02:52 np0005532762 systemd[1]: Reached target Network.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7697] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 15:02:52 np0005532762 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7705] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7727] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 23 15:02:52 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7924] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7927] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7933] device (lo): Activation: successful, device activated.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7938] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7941] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7945] device (eth0): Activation: successful, device activated.
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7950] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:02:52 np0005532762 NetworkManager[856]: <info>  [1763928172.7954] manager: startup complete
Nov 23 15:02:52 np0005532762 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 23 15:02:52 np0005532762 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 15:02:52 np0005532762 systemd[1]: Reached target NFS client services.
Nov 23 15:02:52 np0005532762 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 15:02:52 np0005532762 systemd[1]: Reached target Remote File Systems.
Nov 23 15:02:52 np0005532762 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 15:02:52 np0005532762 systemd[1]: Finished Network Manager Wait Online.
Nov 23 15:02:52 np0005532762 systemd[1]: Starting Cloud-init: Network Stage...
Nov 23 15:02:53 np0005532762 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 23 Nov 2025 20:02:53 +0000. Up 121.73 seconds.
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.106         | 255.255.255.0 | global | fa:16:3e:47:56:6b |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe47:566b/64 |       .       |  link  | fa:16:3e:47:56:6b |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 23 15:02:53 np0005532762 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 15:02:55 np0005532762 cloud-init[920]: Generating public/private rsa key pair.
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key fingerprint is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: SHA256:2X/a+x3yzoKiOQtmqaCDXQNokkjl6+60CwjQt540rQw root@np0005532762.novalocal
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key's randomart image is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: +---[RSA 3072]----+
Nov 23 15:02:55 np0005532762 cloud-init[920]: |  ..             |
Nov 23 15:02:55 np0005532762 cloud-init[920]: | o.              |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |+o...            |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |*....o   o       |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |+ Eo+ . S .      |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |o .=o+.    .     |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |o+ +=*      o... |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |+ * = .... . =+ o|
Nov 23 15:02:55 np0005532762 cloud-init[920]: |...*.  ++ . . =*o|
Nov 23 15:02:55 np0005532762 cloud-init[920]: +----[SHA256]-----+
Nov 23 15:02:55 np0005532762 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key fingerprint is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: SHA256:lOGSmVM7XlPQR0FLASu744xqdsqMd652eN3QM/lMVPg root@np0005532762.novalocal
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key's randomart image is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: +---[ECDSA 256]---+
Nov 23 15:02:55 np0005532762 cloud-init[920]: |        o .oo+*+ |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |       * + ..oo..|
Nov 23 15:02:55 np0005532762 cloud-init[920]: |      * * + ...o |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |       = o +  . E|
Nov 23 15:02:55 np0005532762 cloud-init[920]: |        S .. o   |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |          ..= .  |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |       . .oo *   |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |     +* =+... o  |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |    .=*Xo o      |
Nov 23 15:02:55 np0005532762 cloud-init[920]: +----[SHA256]-----+
Nov 23 15:02:55 np0005532762 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 23 15:02:55 np0005532762 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key fingerprint is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: SHA256:S7mzEKj5Gi6Efkt6Y9s35zwwLQUqFptapNpL6wpsgkg root@np0005532762.novalocal
Nov 23 15:02:55 np0005532762 cloud-init[920]: The key's randomart image is:
Nov 23 15:02:55 np0005532762 cloud-init[920]: +--[ED25519 256]--+
Nov 23 15:02:55 np0005532762 cloud-init[920]: |                 |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |   o   .         |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |  o + . .        |
Nov 23 15:02:55 np0005532762 cloud-init[920]: | . * o   o       |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |oE+ o . S        |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |Booo   * +       |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |B+++  . B        |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |=o=B.  +.=       |
Nov 23 15:02:55 np0005532762 cloud-init[920]: |.=B+=.. =o.      |
Nov 23 15:02:55 np0005532762 cloud-init[920]: +----[SHA256]-----+
Nov 23 15:02:55 np0005532762 systemd[1]: Finished Cloud-init: Network Stage.
Nov 23 15:02:55 np0005532762 systemd[1]: Reached target Cloud-config availability.
Nov 23 15:02:55 np0005532762 systemd[1]: Reached target Network is Online.
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Cloud-init: Config Stage...
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Crash recovery kernel arming...
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Notify NFS peers of a restart...
Nov 23 15:02:55 np0005532762 systemd[1]: Starting System Logging Service...
Nov 23 15:02:55 np0005532762 sm-notify[1003]: Version 2.5.4 starting
Nov 23 15:02:55 np0005532762 systemd[1]: Starting OpenSSH server daemon...
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Permit User Sessions...
Nov 23 15:02:55 np0005532762 systemd[1]: Started Notify NFS peers of a restart.
Nov 23 15:02:55 np0005532762 systemd[1]: Finished Permit User Sessions.
Nov 23 15:02:55 np0005532762 systemd[1]: Started Command Scheduler.
Nov 23 15:02:55 np0005532762 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 23 15:02:55 np0005532762 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 23 15:02:55 np0005532762 systemd[1]: Started Getty on tty1.
Nov 23 15:02:55 np0005532762 systemd[1]: Started Serial Getty on ttyS0.
Nov 23 15:02:55 np0005532762 systemd[1]: Reached target Login Prompts.
Nov 23 15:02:55 np0005532762 systemd[1]: Started OpenSSH server daemon.
Nov 23 15:02:55 np0005532762 systemd[1]: Started System Logging Service.
Nov 23 15:02:55 np0005532762 systemd[1]: Reached target Multi-User System.
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 23 15:02:55 np0005532762 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 23 15:02:55 np0005532762 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 23 15:02:55 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:02:55 np0005532762 kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 23 15:02:55 np0005532762 kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 23 15:02:55 np0005532762 cloud-init[1132]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 23 Nov 2025 20:02:55 +0000. Up 124.01 seconds.
Nov 23 15:02:55 np0005532762 systemd[1]: Finished Cloud-init: Config Stage.
Nov 23 15:02:55 np0005532762 systemd[1]: Starting Cloud-init: Final Stage...
Nov 23 15:02:55 np0005532762 dracut[1267]: dracut-057-102.git20250818.el9
Nov 23 15:02:55 np0005532762 cloud-init[1290]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 23 Nov 2025 20:02:55 +0000. Up 124.46 seconds.
Nov 23 15:02:55 np0005532762 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 23 15:02:55 np0005532762 cloud-init[1319]: #############################################################
Nov 23 15:02:55 np0005532762 cloud-init[1322]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 23 15:02:55 np0005532762 cloud-init[1329]: 256 SHA256:lOGSmVM7XlPQR0FLASu744xqdsqMd652eN3QM/lMVPg root@np0005532762.novalocal (ECDSA)
Nov 23 15:02:55 np0005532762 cloud-init[1336]: 256 SHA256:S7mzEKj5Gi6Efkt6Y9s35zwwLQUqFptapNpL6wpsgkg root@np0005532762.novalocal (ED25519)
Nov 23 15:02:55 np0005532762 cloud-init[1343]: 3072 SHA256:2X/a+x3yzoKiOQtmqaCDXQNokkjl6+60CwjQt540rQw root@np0005532762.novalocal (RSA)
Nov 23 15:02:55 np0005532762 cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 23 15:02:55 np0005532762 cloud-init[1351]: #############################################################
Nov 23 15:02:56 np0005532762 cloud-init[1290]: Cloud-init v. 24.4-7.el9 finished at Sun, 23 Nov 2025 20:02:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 124.66 seconds
Nov 23 15:02:56 np0005532762 systemd[1]: Finished Cloud-init: Final Stage.
Nov 23 15:02:56 np0005532762 systemd[1]: Reached target Cloud-init target.
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 15:02:56 np0005532762 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: memstrack is not available
Nov 23 15:02:57 np0005532762 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 15:02:57 np0005532762 dracut[1269]: memstrack is not available
Nov 23 15:02:57 np0005532762 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 15:02:57 np0005532762 dracut[1269]: *** Including module: systemd ***
Nov 23 15:02:57 np0005532762 dracut[1269]: *** Including module: fips ***
Nov 23 15:02:58 np0005532762 dracut[1269]: *** Including module: systemd-initrd ***
Nov 23 15:02:58 np0005532762 dracut[1269]: *** Including module: i18n ***
Nov 23 15:02:58 np0005532762 dracut[1269]: *** Including module: drm ***
Nov 23 15:02:58 np0005532762 dracut[1269]: *** Including module: prefixdevname ***
Nov 23 15:02:58 np0005532762 dracut[1269]: *** Including module: kernel-modules ***
Nov 23 15:02:58 np0005532762 kernel: block vda: the capability attribute has been deprecated.
Nov 23 15:02:59 np0005532762 chronyd[808]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 23 15:02:59 np0005532762 chronyd[808]: System clock TAI offset set to 37 seconds
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: kernel-modules-extra ***
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: qemu ***
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: fstab-sys ***
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: rootfs-block ***
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: terminfo ***
Nov 23 15:02:59 np0005532762 dracut[1269]: *** Including module: udev-rules ***
Nov 23 15:03:00 np0005532762 dracut[1269]: Skipping udev rule: 91-permissions.rules
Nov 23 15:03:00 np0005532762 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: virtiofs ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: dracut-systemd ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: usrmount ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: base ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: fs-lib ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: kdumpbase ***
Nov 23 15:03:00 np0005532762 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 23 15:03:00 np0005532762 dracut[1269]:  microcode_ctl module: mangling fw_dir
Nov 23 15:03:00 np0005532762 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 25 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 31 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 28 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 32 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 30 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 23 15:03:01 np0005532762 irqbalance[786]: IRQ 29 affinity is now unmanaged
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 23 15:03:01 np0005532762 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 23 15:03:01 np0005532762 dracut[1269]: *** Including module: openssl ***
Nov 23 15:03:01 np0005532762 dracut[1269]: *** Including module: shutdown ***
Nov 23 15:03:01 np0005532762 dracut[1269]: *** Including module: squash ***
Nov 23 15:03:01 np0005532762 dracut[1269]: *** Including modules done ***
Nov 23 15:03:01 np0005532762 dracut[1269]: *** Installing kernel module dependencies ***
Nov 23 15:03:02 np0005532762 dracut[1269]: *** Installing kernel module dependencies done ***
Nov 23 15:03:02 np0005532762 dracut[1269]: *** Resolving executable dependencies ***
Nov 23 15:03:02 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:03:05 np0005532762 dracut[1269]: *** Resolving executable dependencies done ***
Nov 23 15:03:05 np0005532762 dracut[1269]: *** Generating early-microcode cpio image ***
Nov 23 15:03:05 np0005532762 dracut[1269]: *** Store current command line parameters ***
Nov 23 15:03:05 np0005532762 dracut[1269]: Stored kernel commandline:
Nov 23 15:03:05 np0005532762 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Nov 23 15:03:06 np0005532762 dracut[1269]: *** Install squash loader ***
Nov 23 15:03:07 np0005532762 dracut[1269]: *** Squashing the files inside the initramfs ***
Nov 23 15:03:08 np0005532762 systemd[1]: Created slice User Slice of UID 1000.
Nov 23 15:03:09 np0005532762 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 23 15:03:09 np0005532762 systemd-logind[793]: New session 1 of user zuul.
Nov 23 15:03:09 np0005532762 dracut[1269]: *** Squashing the files inside the initramfs done ***
Nov 23 15:03:09 np0005532762 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 23 15:03:09 np0005532762 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 23 15:03:09 np0005532762 dracut[1269]: *** Hardlinking files ***
Nov 23 15:03:09 np0005532762 systemd[1]: Starting User Manager for UID 1000...
Nov 23 15:03:09 np0005532762 dracut[1269]: *** Hardlinking files done ***
Nov 23 15:03:09 np0005532762 systemd[4152]: Queued start job for default target Main User Target.
Nov 23 15:03:09 np0005532762 systemd[4152]: Created slice User Application Slice.
Nov 23 15:03:09 np0005532762 systemd[4152]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:03:09 np0005532762 systemd[4152]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:03:09 np0005532762 systemd[4152]: Reached target Paths.
Nov 23 15:03:09 np0005532762 systemd[4152]: Reached target Timers.
Nov 23 15:03:09 np0005532762 systemd[4152]: Starting D-Bus User Message Bus Socket...
Nov 23 15:03:09 np0005532762 systemd[4152]: Starting Create User's Volatile Files and Directories...
Nov 23 15:03:09 np0005532762 systemd[4152]: Finished Create User's Volatile Files and Directories.
Nov 23 15:03:09 np0005532762 systemd[4152]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:03:09 np0005532762 systemd[4152]: Reached target Sockets.
Nov 23 15:03:09 np0005532762 systemd[4152]: Reached target Basic System.
Nov 23 15:03:09 np0005532762 systemd[4152]: Reached target Main User Target.
Nov 23 15:03:09 np0005532762 systemd[4152]: Startup finished in 133ms.
Nov 23 15:03:09 np0005532762 systemd[1]: Started User Manager for UID 1000.
Nov 23 15:03:09 np0005532762 systemd[1]: Started Session 1 of User zuul.
Nov 23 15:03:10 np0005532762 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 23 15:03:10 np0005532762 python3[4266]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:10 np0005532762 kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 23 15:03:10 np0005532762 kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 23 15:03:10 np0005532762 systemd[1]: Finished Crash recovery kernel arming.
Nov 23 15:03:10 np0005532762 systemd[1]: Startup finished in 1.493s (kernel) + 1min 55.898s (initrd) + 22.192s (userspace) = 2min 19.584s.
Nov 23 15:03:14 np0005532762 python3[4408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:21 np0005532762 python3[4466]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:22 np0005532762 python3[4506]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 23 15:03:22 np0005532762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:03:24 np0005532762 python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtaVH+Hfp24GC/nLOCl87TIJDf22iIpXaDmkip6hyFZ60lyVpfYxFl6Z4FqAbKci+Ock4NHD78xcKBN+nqpMJyIdLDl6IlqwxWyUc/lX5/TIm6PknK9ykLQzLzQZzRt1Mk1hK89Am3bbY9TVh2ZdujVyOmjWLVqA/0FhkvYKJWaid0pgs6EdTygKGzSfc7V7Zm4ijA+aHyny1AE6h4zzdGP/d6AL8fjaGD/LpcU6DnbbD9WHzrmCJXOyJa5/Ky5sttSY3WpH33eL7o554W1og4Dq5c+z/Pc0NlJT1DXPpxrtrLpJ57vb04Ae1Wg5PeG+MECxQWJRQBS51hNbLb4KTkDErpMaWbfcwdnzisQHazTgjNidmG34/j4ZvJ/NP2OkEBabHukyMvOCFw3Ew9lQ5eR2EiNjFtdvI12kRiXyyk9Ti3dsncy9kfInD5nPUeVGnxbIGdwP/T5Z2crXhgdrIWCRjRMvV/756tjKFXfzl/eIzO6UcLkU2I9qdqZpL0h8U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:24 np0005532762 python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:25 np0005532762 python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:25 np0005532762 python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928205.0392907-252-198545758705626/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa follow=False checksum=b8b11f458d3dcaed5d0ce620e052c77faf8a3312 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:26 np0005532762 python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:26 np0005532762 python3[4922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928206.0701554-307-143178181218506/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b927b3f7e94443b59884cfdc0421ba80_id_rsa.pub follow=False checksum=c143f6be1d4420dad576f5c3c6738e84bfb79a9b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:28 np0005532762 python3[4970]: ansible-ping Invoked with data=pong
Nov 23 15:03:29 np0005532762 python3[4994]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:03:31 np0005532762 python3[5052]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 23 15:03:32 np0005532762 python3[5084]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:32 np0005532762 python3[5108]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:33 np0005532762 python3[5132]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:34 np0005532762 python3[5156]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:34 np0005532762 python3[5180]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:34 np0005532762 python3[5204]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:37 np0005532762 python3[5230]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:37 np0005532762 python3[5308]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:38 np0005532762 python3[5381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928217.2989328-32-185764935004977/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:38 np0005532762 python3[5429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532762 python3[5453]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532762 python3[5477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:39 np0005532762 python3[5501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532762 python3[5525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532762 python3[5549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532762 python3[5573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:40 np0005532762 python3[5597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532762 python3[5621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532762 python3[5645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:41 np0005532762 python3[5669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532762 python3[5693]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532762 python3[5717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:42 np0005532762 python3[5741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532762 python3[5765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532762 python3[5789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532762 python3[5813]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:43 np0005532762 python3[5837]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532762 python3[5861]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532762 python3[5885]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:44 np0005532762 python3[5909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532762 python3[5933]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532762 python3[5957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:45 np0005532762 python3[5981]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:46 np0005532762 python3[6005]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:46 np0005532762 python3[6029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:03:48 np0005532762 python3[6055]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:03:48 np0005532762 systemd[1]: Starting Time & Date Service...
Nov 23 15:03:48 np0005532762 systemd[1]: Started Time & Date Service.
Nov 23 15:03:48 np0005532762 systemd-timedated[6057]: Changed time zone to 'UTC' (UTC).
Nov 23 15:03:49 np0005532762 python3[6086]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:49 np0005532762 python3[6162]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:49 np0005532762 python3[6233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763928229.4444351-253-21382755043799/source _original_basename=tmpd23_9fs0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:50 np0005532762 python3[6333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:51 np0005532762 python3[6404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928230.4333925-303-278983814884466/source _original_basename=tmpqvy1r4lj follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:52 np0005532762 python3[6506]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:52 np0005532762 python3[6579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763928231.69662-382-96088037328665/source _original_basename=tmp9nuo4d3z follow=False checksum=c8c0add412d571e63862b10c4bf0a26f0fcae547 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:53 np0005532762 python3[6627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:03:53 np0005532762 python3[6653]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:03:53 np0005532762 python3[6733]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:03:54 np0005532762 python3[6806]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928233.4755368-452-190537495677611/source _original_basename=tmpug09hiw6 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:03:54 np0005532762 python3[6857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4746-eccf-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:03:55 np0005532762 python3[6885]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4746-eccf-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 23 15:03:57 np0005532762 python3[6913]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:04:16 np0005532762 python3[6939]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:04:18 np0005532762 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:05:16 np0005532762 systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 23 15:05:23 np0005532762 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 23 15:05:23 np0005532762 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2552] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:05:23 np0005532762 systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2802] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2841] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2848] device (eth1): carrier: link connected
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2851] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2862] policy: auto-activating connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2869] device (eth1): Activation: starting connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2870] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2874] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2881] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:05:23 np0005532762 NetworkManager[856]: <info>  [1763928323.2888] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:23 np0005532762 systemd[4152]: Starting Mark boot as successful...
Nov 23 15:05:23 np0005532762 systemd[4152]: Finished Mark boot as successful.
Nov 23 15:05:23 np0005532762 systemd-logind[793]: New session 3 of user zuul.
Nov 23 15:05:23 np0005532762 systemd[1]: Started Session 3 of User zuul.
Nov 23 15:05:24 np0005532762 python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-f412-6632-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:05:34 np0005532762 python3[7054]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:05:34 np0005532762 python3[7127]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763928333.9925566-155-212175523479482/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1a02ea7f5b2269dc33b0d44c617e69d144a93207 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:05:35 np0005532762 python3[7177]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:05:35 np0005532762 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 15:05:35 np0005532762 systemd[1]: Stopped Network Manager Wait Online.
Nov 23 15:05:35 np0005532762 systemd[1]: Stopping Network Manager Wait Online...
Nov 23 15:05:35 np0005532762 systemd[1]: Stopping Network Manager...
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2308] caught SIGTERM, shutting down normally.
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2323] dhcp4 (eth0): state changed no lease
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2326] manager: NetworkManager state is now CONNECTING
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2469] dhcp4 (eth1): canceled DHCP transaction
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2469] dhcp4 (eth1): state changed no lease
Nov 23 15:05:35 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:05:35 np0005532762 NetworkManager[856]: <info>  [1763928335.2531] exiting (success)
Nov 23 15:05:35 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:05:35 np0005532762 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 15:05:35 np0005532762 systemd[1]: Stopped Network Manager.
Nov 23 15:05:35 np0005532762 systemd[1]: NetworkManager.service: Consumed 1.208s CPU time, 10.0M memory peak.
Nov 23 15:05:35 np0005532762 systemd[1]: Starting Network Manager...
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.3317] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.3318] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.3393] manager[0x560c24eeb070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:05:35 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 15:05:35 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4520] hostname: hostname: using hostnamed
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4521] hostname: static hostname changed from (none) to "np0005532762.novalocal"
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4526] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4531] manager[0x560c24eeb070]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4532] manager[0x560c24eeb070]: rfkill: WWAN hardware radio set enabled
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4558] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4559] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4560] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4560] manager: Networking is enabled by state file
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4563] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4566] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4594] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4604] dhcp: init: Using DHCP client 'internal'
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4606] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4613] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4620] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4628] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4635] device (eth0): carrier: link connected
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4639] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4646] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4647] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4656] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4663] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4669] device (eth1): carrier: link connected
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4673] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4679] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f) (indicated)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4680] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4686] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4693] device (eth1): Activation: starting connection 'Wired connection 1' (b8d72197-27ea-3e22-9d94-94c7806ccb0f)
Nov 23 15:05:35 np0005532762 systemd[1]: Started Network Manager.
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4699] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4706] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4712] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4715] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4719] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4726] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4730] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4735] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4740] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4748] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4752] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4761] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4764] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4778] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4783] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.4791] device (lo): Activation: successful, device activated.
Nov 23 15:05:35 np0005532762 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6572] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6585] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6658] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6694] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6696] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6700] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6704] device (eth0): Activation: successful, device activated.
Nov 23 15:05:35 np0005532762 NetworkManager[7191]: <info>  [1763928335.6711] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:05:35 np0005532762 python3[7244]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-f412-6632-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:05:45 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:06:05 np0005532762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3723] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:06:20 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:06:20 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3949] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3952] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3961] device (eth1): Activation: successful, device activated.
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3968] manager: startup complete
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3970] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <warn>  [1763928380.3975] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.3982] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 systemd[1]: Finished Network Manager Wait Online.
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4099] dhcp4 (eth1): canceled DHCP transaction
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4102] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4103] dhcp4 (eth1): state changed no lease
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4120] policy: auto-activating connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4125] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4127] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4130] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4137] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4146] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4201] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4205] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:06:20 np0005532762 NetworkManager[7191]: <info>  [1763928380.4212] device (eth1): Activation: successful, device activated.
Nov 23 15:06:30 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:06:35 np0005532762 systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Nov 23 15:06:35 np0005532762 systemd[1]: session-3.scope: Deactivated successfully.
Nov 23 15:06:35 np0005532762 systemd[1]: session-3.scope: Consumed 1.650s CPU time.
Nov 23 15:06:35 np0005532762 systemd-logind[793]: Removed session 3.
Nov 23 15:07:10 np0005532762 systemd-logind[793]: New session 4 of user zuul.
Nov 23 15:07:10 np0005532762 systemd[1]: Started Session 4 of User zuul.
Nov 23 15:07:11 np0005532762 python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:07:11 np0005532762 python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928431.0703604-373-246174899525340/source _original_basename=tmprnfgy_rq follow=False checksum=3134bd1d03fba929119b03a893a690ab48d9a2ea backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:07:14 np0005532762 systemd[1]: session-4.scope: Deactivated successfully.
Nov 23 15:07:14 np0005532762 systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Nov 23 15:07:14 np0005532762 systemd-logind[793]: Removed session 4.
Nov 23 15:08:41 np0005532762 systemd[4152]: Created slice User Background Tasks Slice.
Nov 23 15:08:41 np0005532762 systemd[4152]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 15:08:41 np0005532762 systemd[4152]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 15:12:18 np0005532762 systemd-logind[793]: New session 5 of user zuul.
Nov 23 15:12:18 np0005532762 systemd[1]: Started Session 5 of User zuul.
Nov 23 15:12:18 np0005532762 python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cd8-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:19 np0005532762 python3[7538]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:19 np0005532762 python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:19 np0005532762 python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:20 np0005532762 python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:20 np0005532762 python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:21 np0005532762 python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:12:21 np0005532762 python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928741.0655618-511-277801935526292/source _original_basename=tmpnxpk74nv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:12:22 np0005532762 python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:12:22 np0005532762 systemd[1]: Reloading.
Nov 23 15:12:22 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:12:24 np0005532762 python3[7899]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 23 15:12:24 np0005532762 python3[7925]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532762 python3[7953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532762 python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:25 np0005532762 python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:26 np0005532762 python3[8036]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-bee1-1da1-000000001cdf-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:12:26 np0005532762 python3[8066]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:12:29 np0005532762 systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 15:12:29 np0005532762 systemd[1]: session-5.scope: Consumed 4.110s CPU time.
Nov 23 15:12:29 np0005532762 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Nov 23 15:12:29 np0005532762 systemd-logind[793]: Removed session 5.
Nov 23 15:12:31 np0005532762 systemd-logind[793]: New session 6 of user zuul.
Nov 23 15:12:31 np0005532762 irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 23 15:12:31 np0005532762 irqbalance[786]: IRQ 27 affinity is now unmanaged
Nov 23 15:12:31 np0005532762 systemd[1]: Started Session 6 of User zuul.
Nov 23 15:12:31 np0005532762 python3[8100]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 15:12:46 np0005532762 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:12:46 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:12:55 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  Converting 385 SID table entries...
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:13:04 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:06 np0005532762 setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 23 15:13:06 np0005532762 setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 23 15:13:17 np0005532762 kernel: SELinux:  Converting 388 SID table entries...
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:13:17 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:13:40 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 15:13:40 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:13:40 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:13:40 np0005532762 systemd[1]: Reloading.
Nov 23 15:13:40 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:13:40 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:13:46 np0005532762 python3[12718]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ba7b-575b-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:13:47 np0005532762 kernel: evm: overlay not supported
Nov 23 15:13:47 np0005532762 systemd[4152]: Starting D-Bus User Message Bus...
Nov 23 15:13:47 np0005532762 dbus-broker-launch[13572]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 23 15:13:47 np0005532762 dbus-broker-launch[13572]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 23 15:13:47 np0005532762 systemd[4152]: Started D-Bus User Message Bus.
Nov 23 15:13:47 np0005532762 dbus-broker-lau[13572]: Ready
Nov 23 15:13:47 np0005532762 systemd[4152]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 15:13:47 np0005532762 systemd[4152]: Created slice Slice /user.
Nov 23 15:13:47 np0005532762 systemd[4152]: podman-13446.scope: unit configures an IP firewall, but not running as root.
Nov 23 15:13:47 np0005532762 systemd[4152]: (This warning is only shown for the first unit using IP firewalling.)
Nov 23 15:13:47 np0005532762 systemd[4152]: Started podman-13446.scope.
Nov 23 15:13:47 np0005532762 systemd[4152]: Started podman-pause-fb01b08e.scope.
Nov 23 15:13:47 np0005532762 systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 15:13:47 np0005532762 systemd[1]: session-6.scope: Consumed 59.860s CPU time.
Nov 23 15:13:47 np0005532762 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Nov 23 15:13:47 np0005532762 systemd-logind[793]: Removed session 6.
Nov 23 15:14:07 np0005532762 systemd-logind[793]: New session 7 of user zuul.
Nov 23 15:14:07 np0005532762 systemd[1]: Started Session 7 of User zuul.
Nov 23 15:14:08 np0005532762 python3[20389]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:14:08 np0005532762 python3[20576]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:14:09 np0005532762 python3[20969]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532762.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 15:14:10 np0005532762 python3[21193]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA87KGYjjoyogEDAuKEHrB6Oxv3mIvu13bhzDbjQjrNyl3D2q3szz508Yk2UHZaBKDHJbLxThWYWGwZpHtr+UTo= zuul@np0005532760.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 15:14:10 np0005532762 python3[21439]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:14:10 np0005532762 python3[21679]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763928850.225103-151-2060321674992/source _original_basename=tmpmdj0othu follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:14:11 np0005532762 python3[22012]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Nov 23 15:14:11 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 15:14:11 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 15:14:11 np0005532762 systemd-hostnamed[22100]: Changed pretty hostname to 'compute-1'
Nov 23 15:14:11 np0005532762 systemd-hostnamed[22100]: Hostname set to <compute-1> (static)
Nov 23 15:14:11 np0005532762 NetworkManager[7191]: <info>  [1763928851.9481] hostname: static hostname changed from "np0005532762.novalocal" to "compute-1"
Nov 23 15:14:11 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:14:11 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:14:12 np0005532762 systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 15:14:12 np0005532762 systemd[1]: session-7.scope: Consumed 2.273s CPU time.
Nov 23 15:14:12 np0005532762 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Nov 23 15:14:12 np0005532762 systemd-logind[793]: Removed session 7.
Nov 23 15:14:21 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:14:39 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:14:39 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:14:39 np0005532762 systemd[1]: man-db-cache-update.service: Consumed 1min 3.859s CPU time.
Nov 23 15:14:39 np0005532762 systemd[1]: run-r98f26c7c34084d02b10991c9a52bb160.service: Deactivated successfully.
Nov 23 15:14:41 np0005532762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:16:31 np0005532762 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 23 15:16:31 np0005532762 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 23 15:16:31 np0005532762 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 23 15:16:31 np0005532762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 23 15:18:11 np0005532762 systemd-logind[793]: New session 8 of user zuul.
Nov 23 15:18:12 np0005532762 systemd[1]: Started Session 8 of User zuul.
Nov 23 15:18:12 np0005532762 python3[29980]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:18:14 np0005532762 python3[30096]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:14 np0005532762 python3[30169]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:15 np0005532762 python3[30195]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:15 np0005532762 python3[30268]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:15 np0005532762 python3[30294]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:16 np0005532762 python3[30367]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:16 np0005532762 python3[30393]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:17 np0005532762 python3[30466]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:17 np0005532762 python3[30492]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:17 np0005532762 python3[30565]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:17 np0005532762 python3[30591]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:18 np0005532762 python3[30664]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:18 np0005532762 python3[30690]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:18:18 np0005532762 python3[30763]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763929094.2987156-33976-272682233161012/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:18:31 np0005532762 python3[30816]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:23:30 np0005532762 systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 15:23:30 np0005532762 systemd[1]: session-8.scope: Consumed 5.035s CPU time.
Nov 23 15:23:30 np0005532762 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Nov 23 15:23:30 np0005532762 systemd-logind[793]: Removed session 8.
Nov 23 15:25:07 np0005532762 systemd[1]: Starting dnf makecache...
Nov 23 15:25:07 np0005532762 dnf[30868]: Failed determining last makecache time.
Nov 23 15:25:07 np0005532762 dnf[30868]: delorean-openstack-barbican-42b4c41831408a8e323 144 kB/s |  13 kB     00:00
Nov 23 15:25:07 np0005532762 dnf[30868]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.6 MB/s |  65 kB     00:00
Nov 23 15:25:07 np0005532762 dnf[30868]: delorean-openstack-cinder-1c00d6490d88e436f26ef 960 kB/s |  32 kB     00:00
Nov 23 15:25:07 np0005532762 dnf[30868]: delorean-python-stevedore-c4acc5639fd2329372142 4.4 MB/s | 131 kB     00:00
Nov 23 15:25:07 np0005532762 dnf[30868]: delorean-python-observabilityclient-2f31846d73c 362 kB/s |  25 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-os-net-config-bbae2ed8a159b0435a473f38 1.7 MB/s | 356 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 470 kB/s |  42 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-python-designate-tests-tempest-347fdbc 314 kB/s |  18 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-glance-1fd12c29b339f30fe823e 395 kB/s |  18 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.0 MB/s |  29 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-manila-3c01b7181572c95dac462 944 kB/s |  25 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-python-whitebox-neutron-tests-tempest- 4.4 MB/s | 154 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-octavia-ba397f07a7331190208c 444 kB/s |  26 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-openstack-watcher-c014f81a8647287f6dcc 191 kB/s |  16 kB     00:00
Nov 23 15:25:08 np0005532762 dnf[30868]: delorean-python-tcib-1124124ec06aadbac34f0d340b  76 kB/s | 7.4 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 652 kB/s | 144 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: delorean-openstack-swift-dc98a8463506ac520c469a 167 kB/s |  14 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: delorean-python-tempestconf-8515371b7cceebd4282 1.9 MB/s |  53 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.4 MB/s |  96 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: CentOS Stream 9 - BaseOS                         53 kB/s | 7.3 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: CentOS Stream 9 - AppStream                      72 kB/s | 7.4 kB     00:00
Nov 23 15:25:09 np0005532762 dnf[30868]: CentOS Stream 9 - CRB                            78 kB/s | 7.2 kB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: CentOS Stream 9 - Extras packages                72 kB/s | 8.3 kB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: dlrn-antelope-testing                            28 MB/s | 1.1 MB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: dlrn-antelope-build-deps                         14 MB/s | 461 kB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: centos9-rabbitmq                                9.4 MB/s | 123 kB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: centos9-storage                                  23 MB/s | 415 kB     00:00
Nov 23 15:25:10 np0005532762 dnf[30868]: centos9-opstools                                4.4 MB/s |  51 kB     00:00
Nov 23 15:25:11 np0005532762 dnf[30868]: NFV SIG OpenvSwitch                              27 MB/s | 454 kB     00:00
Nov 23 15:25:11 np0005532762 dnf[30868]: repo-setup-centos-appstream                      66 MB/s |  25 MB     00:00
Nov 23 15:25:17 np0005532762 dnf[30868]: repo-setup-centos-baseos                         69 MB/s | 8.8 MB     00:00
Nov 23 15:25:18 np0005532762 dnf[30868]: repo-setup-centos-highavailability               18 MB/s | 744 kB     00:00
Nov 23 15:25:19 np0005532762 dnf[30868]: repo-setup-centos-powertools                     36 MB/s | 7.3 MB     00:00
Nov 23 15:25:22 np0005532762 dnf[30868]: Extra Packages for Enterprise Linux 9 - x86_64   17 MB/s |  20 MB     00:01
Nov 23 15:25:35 np0005532762 dnf[30868]: Metadata cache created.
Nov 23 15:25:35 np0005532762 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 15:25:35 np0005532762 systemd[1]: Finished dnf makecache.
Nov 23 15:25:35 np0005532762 systemd[1]: dnf-makecache.service: Consumed 24.147s CPU time.
Nov 23 15:29:41 np0005532762 systemd-logind[793]: New session 9 of user zuul.
Nov 23 15:29:41 np0005532762 systemd[1]: Started Session 9 of User zuul.
Nov 23 15:29:42 np0005532762 python3.9[31161]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:29:43 np0005532762 python3.9[31342]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:29:51 np0005532762 systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 15:29:51 np0005532762 systemd[1]: session-9.scope: Consumed 8.338s CPU time.
Nov 23 15:29:51 np0005532762 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Nov 23 15:29:51 np0005532762 systemd-logind[793]: Removed session 9.
Nov 23 15:30:06 np0005532762 systemd-logind[793]: New session 10 of user zuul.
Nov 23 15:30:06 np0005532762 systemd[1]: Started Session 10 of User zuul.
Nov 23 15:30:07 np0005532762 python3.9[31556]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 15:30:09 np0005532762 python3.9[31730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:10 np0005532762 python3.9[31882]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:30:11 np0005532762 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 23 15:30:11 np0005532762 irqbalance[786]: IRQ 26 affinity is now unmanaged
Nov 23 15:30:11 np0005532762 python3.9[32035]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:30:12 np0005532762 python3.9[32187]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:30:13 np0005532762 python3.9[32339]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:30:14 np0005532762 python3.9[32462]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763929812.8966084-178-220138245458415/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:30:15 np0005532762 python3.9[32614]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:16 np0005532762 python3.9[32770]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:30:17 np0005532762 python3.9[32922]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:30:17 np0005532762 python3.9[33072]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:30:23 np0005532762 python3.9[33325]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:30:24 np0005532762 python3.9[33475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:25 np0005532762 python3.9[33629]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:30:26 np0005532762 python3.9[33787]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:30:27 np0005532762 python3.9[33871]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:31:11 np0005532762 systemd[1]: Reloading.
Nov 23 15:31:11 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:11 np0005532762 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 23 15:31:12 np0005532762 systemd[1]: Reloading.
Nov 23 15:31:12 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:12 np0005532762 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 23 15:31:12 np0005532762 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 23 15:31:12 np0005532762 systemd[1]: Reloading.
Nov 23 15:31:12 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:31:12 np0005532762 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 23 15:31:13 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:31:13 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:31:13 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:32:27 np0005532762 kernel: SELinux:  Converting 2718 SID table entries...
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:32:27 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:32:27 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 23 15:32:27 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:32:27 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:32:27 np0005532762 systemd[1]: Reloading.
Nov 23 15:32:27 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:32:27 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:32:28 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:32:28 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:32:28 np0005532762 systemd[1]: run-re06584b61c12435db70c4ebec33d82b7.service: Deactivated successfully.
Nov 23 15:32:28 np0005532762 python3.9[35391]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:32:31 np0005532762 python3.9[35672]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 15:32:32 np0005532762 python3.9[35824]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 15:32:36 np0005532762 python3.9[35979]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:32:38 np0005532762 python3.9[36132]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 23 15:32:44 np0005532762 python3.9[36285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:32:47 np0005532762 python3.9[36437]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:32:47 np0005532762 python3.9[36560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763929966.6912267-667-126697307908575/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:32:49 np0005532762 python3.9[36714]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:32:49 np0005532762 python3.9[36866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:32:50 np0005532762 python3.9[37019]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:32:52 np0005532762 python3.9[37171]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 15:32:53 np0005532762 python3.9[37324]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:32:53 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:32:53 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:32:54 np0005532762 python3.9[37483]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 15:32:55 np0005532762 python3.9[37643]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 15:32:55 np0005532762 python3.9[37796]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:32:56 np0005532762 python3.9[37956]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 23 15:32:58 np0005532762 python3.9[38108]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:00 np0005532762 python3.9[38261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:01 np0005532762 python3.9[38413]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:33:01 np0005532762 python3.9[38536]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929980.8500473-1024-104727972828753/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:03 np0005532762 python3.9[38688]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:33:03 np0005532762 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:33:03 np0005532762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 15:33:03 np0005532762 kernel: Bridge firewalling registered
Nov 23 15:33:03 np0005532762 systemd-modules-load[38692]: Inserted module 'br_netfilter'
Nov 23 15:33:03 np0005532762 systemd[1]: Finished Load Kernel Modules.
Nov 23 15:33:04 np0005532762 python3.9[38847]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:33:04 np0005532762 python3.9[38970]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763929983.6371734-1094-63561188763700/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:33:06 np0005532762 python3.9[39122]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:09 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:33:09 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:33:09 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:33:09 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:33:09 np0005532762 systemd[1]: Reloading.
Nov 23 15:33:09 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:09 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:33:11 np0005532762 python3.9[41346]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:33:12 np0005532762 python3.9[42420]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 15:33:13 np0005532762 python3.9[43129]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:33:13 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:33:13 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:33:13 np0005532762 systemd[1]: man-db-cache-update.service: Consumed 4.593s CPU time.
Nov 23 15:33:13 np0005532762 systemd[1]: run-r4665eb4fbeb246e3bb403347632edc8e.service: Deactivated successfully.
Nov 23 15:33:14 np0005532762 python3.9[43283]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:14 np0005532762 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:33:15 np0005532762 systemd[1]: Starting Authorization Manager...
Nov 23 15:33:15 np0005532762 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 15:33:15 np0005532762 polkitd[43500]: Started polkitd version 0.117
Nov 23 15:33:15 np0005532762 systemd[1]: Started Authorization Manager.
Nov 23 15:33:16 np0005532762 python3.9[43670]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:16 np0005532762 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 15:33:16 np0005532762 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 15:33:16 np0005532762 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 15:33:16 np0005532762 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:33:16 np0005532762 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 15:33:17 np0005532762 python3.9[43834]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 15:33:21 np0005532762 python3.9[43986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:21 np0005532762 systemd[1]: Reloading.
Nov 23 15:33:21 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:22 np0005532762 python3.9[44175]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:33:22 np0005532762 systemd[1]: Reloading.
Nov 23 15:33:22 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:33:24 np0005532762 python3.9[44364]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:25 np0005532762 python3.9[44517]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:25 np0005532762 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 23 15:33:25 np0005532762 python3.9[44670]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:28 np0005532762 python3.9[44832]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:33:29 np0005532762 python3.9[44985]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:33:29 np0005532762 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 15:33:29 np0005532762 systemd[1]: Stopped Apply Kernel Variables.
Nov 23 15:33:29 np0005532762 systemd[1]: Stopping Apply Kernel Variables...
Nov 23 15:33:29 np0005532762 systemd[1]: Starting Apply Kernel Variables...
Nov 23 15:33:29 np0005532762 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 15:33:29 np0005532762 systemd[1]: Finished Apply Kernel Variables.
Nov 23 15:33:30 np0005532762 systemd[1]: session-10.scope: Deactivated successfully.
Nov 23 15:33:30 np0005532762 systemd[1]: session-10.scope: Consumed 2min 12.797s CPU time.
Nov 23 15:33:30 np0005532762 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Nov 23 15:33:30 np0005532762 systemd-logind[793]: Removed session 10.
Nov 23 15:33:35 np0005532762 systemd-logind[793]: New session 11 of user zuul.
Nov 23 15:33:35 np0005532762 systemd[1]: Started Session 11 of User zuul.
Nov 23 15:33:37 np0005532762 python3.9[45171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:33:38 np0005532762 python3.9[45327]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 15:33:39 np0005532762 python3.9[45480]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:33:41 np0005532762 python3.9[45638]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 15:33:42 np0005532762 python3.9[45798]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:33:43 np0005532762 python3.9[45882]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:33:46 np0005532762 python3.9[46045]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:33:58 np0005532762 kernel: SELinux:  Converting 2730 SID table entries...
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:33:58 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:33:58 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 23 15:33:58 np0005532762 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 23 15:34:00 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:00 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:01 np0005532762 systemd[1]: Reloading.
Nov 23 15:34:01 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:01 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:01 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:02 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:02 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:02 np0005532762 systemd[1]: run-r8caf589173554f3fb3dc848d06fd1a03.service: Deactivated successfully.
Nov 23 15:34:03 np0005532762 python3.9[47147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:34:03 np0005532762 systemd[1]: Reloading.
Nov 23 15:34:03 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:03 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:03 np0005532762 systemd[1]: Starting Open vSwitch Database Unit...
Nov 23 15:34:03 np0005532762 chown[47189]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 23 15:34:03 np0005532762 ovs-ctl[47194]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 23 15:34:03 np0005532762 ovs-ctl[47194]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 23 15:34:03 np0005532762 ovs-ctl[47194]: Starting ovsdb-server [  OK  ]
Nov 23 15:34:03 np0005532762 ovs-vsctl[47243]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 23 15:34:04 np0005532762 ovs-vsctl[47260]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d8ff4ac4-2bee-48db-b79e-2466bc4db046\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 23 15:34:04 np0005532762 ovs-ctl[47194]: Configuring Open vSwitch system IDs [  OK  ]
Nov 23 15:34:04 np0005532762 ovs-vsctl[47269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 23 15:34:04 np0005532762 ovs-ctl[47194]: Enabling remote OVSDB managers [  OK  ]
Nov 23 15:34:04 np0005532762 systemd[1]: Started Open vSwitch Database Unit.
Nov 23 15:34:04 np0005532762 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 23 15:34:04 np0005532762 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 23 15:34:04 np0005532762 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 23 15:34:04 np0005532762 kernel: openvswitch: Open vSwitch switching datapath
Nov 23 15:34:04 np0005532762 ovs-ctl[47313]: Inserting openvswitch module [  OK  ]
Nov 23 15:34:04 np0005532762 ovs-ctl[47282]: Starting ovs-vswitchd [  OK  ]
Nov 23 15:34:04 np0005532762 ovs-vsctl[47331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Nov 23 15:34:04 np0005532762 ovs-ctl[47282]: Enabling remote OVSDB managers [  OK  ]
Nov 23 15:34:04 np0005532762 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 23 15:34:04 np0005532762 systemd[1]: Starting Open vSwitch...
Nov 23 15:34:04 np0005532762 systemd[1]: Finished Open vSwitch.
Nov 23 15:34:06 np0005532762 python3.9[47483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:34:08 np0005532762 python3.9[47635]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 15:34:10 np0005532762 kernel: SELinux:  Converting 2744 SID table entries...
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:34:10 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:34:11 np0005532762 python3.9[47790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:34:12 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 23 15:34:12 np0005532762 python3.9[47948]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:14 np0005532762 python3.9[48101]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:34:16 np0005532762 python3.9[48388]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:34:17 np0005532762 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:34:18 np0005532762 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:19 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:19 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:34:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:20 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:20 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:20 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:20 np0005532762 systemd[1]: run-r41905e1977c54505b91471cb03c3b8bf.service: Deactivated successfully.
Nov 23 15:34:21 np0005532762 python3.9[49012]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:34:21 np0005532762 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 15:34:21 np0005532762 systemd[1]: Stopped Network Manager Wait Online.
Nov 23 15:34:21 np0005532762 systemd[1]: Stopping Network Manager Wait Online...
Nov 23 15:34:21 np0005532762 systemd[1]: Stopping Network Manager...
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3874] caught SIGTERM, shutting down normally.
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3893] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3894] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3894] dhcp4 (eth0): state changed no lease
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3896] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:34:21 np0005532762 NetworkManager[7191]: <info>  [1763930061.3957] exiting (success)
Nov 23 15:34:21 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:34:21 np0005532762 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 15:34:21 np0005532762 systemd[1]: Stopped Network Manager.
Nov 23 15:34:21 np0005532762 systemd[1]: NetworkManager.service: Consumed 12.012s CPU time, 4.0M memory peak, read 0B from disk, written 45.0K to disk.
Nov 23 15:34:21 np0005532762 systemd[1]: Starting Network Manager...
Nov 23 15:34:21 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.4480] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6edcf464-8554-408a-ba56-0bae3cf8aec4)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.4481] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.4535] manager[0x555c7ef90090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 15:34:21 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 15:34:21 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5395] hostname: hostname: using hostnamed
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5399] hostname: static hostname changed from (none) to "compute-1"
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5404] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5408] manager[0x555c7ef90090]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5409] manager[0x555c7ef90090]: rfkill: WWAN hardware radio set enabled
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5427] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5435] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5435] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5436] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5436] manager: Networking is enabled by state file
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5438] settings: Loaded settings plugin: keyfile (internal)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5441] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5461] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5468] dhcp: init: Using DHCP client 'internal'
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5470] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5475] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5479] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5485] device (lo): Activation: starting connection 'lo' (170402d3-84eb-4bc9-a75c-092c5ddf07e9)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5491] device (eth0): carrier: link connected
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5495] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5500] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5500] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5506] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5512] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5517] device (eth1): carrier: link connected
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5521] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5525] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57) (indicated)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5526] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5530] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5536] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 15:34:21 np0005532762 systemd[1]: Started Network Manager.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5541] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5548] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5550] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5552] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5553] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5555] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5557] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5560] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5563] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5569] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5572] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5593] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5605] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5614] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5617] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5620] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5625] device (lo): Activation: successful, device activated.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5632] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 15:34:21 np0005532762 systemd[1]: Starting Network Manager Wait Online...
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5697] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5704] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5705] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5711] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5714] device (eth1): Activation: successful, device activated.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5732] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5733] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5736] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5740] device (eth0): Activation: successful, device activated.
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5745] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 15:34:21 np0005532762 NetworkManager[49021]: <info>  [1763930061.5747] manager: startup complete
Nov 23 15:34:21 np0005532762 systemd[1]: Finished Network Manager Wait Online.
Nov 23 15:34:22 np0005532762 python3.9[49241]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:34:26 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:34:26 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:34:27 np0005532762 systemd[1]: Reloading.
Nov 23 15:34:27 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:34:27 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:34:27 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:34:28 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:34:28 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:34:28 np0005532762 systemd[1]: run-r5c1f9044aa364421af8cd5482479367b.service: Deactivated successfully.
Nov 23 15:34:29 np0005532762 python3.9[49702]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:34:30 np0005532762 python3.9[49854]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:31 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:34:31 np0005532762 python3.9[50008]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:32 np0005532762 python3.9[50160]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:33 np0005532762 python3.9[50312]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:34 np0005532762 python3.9[50464]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:35 np0005532762 python3.9[50616]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:35 np0005532762 python3.9[50739]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930074.5595515-648-259711101036548/.source _original_basename=.0daqlenn follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:36 np0005532762 python3.9[50893]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:37 np0005532762 python3.9[51045]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 23 15:34:38 np0005532762 python3.9[51197]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:41 np0005532762 python3.9[51624]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 23 15:34:42 np0005532762 ansible-async_wrapper.py[51799]: Invoked with j595449502514 300 /home/zuul/.ansible/tmp/ansible-tmp-1763930081.4838538-846-47931802382160/AnsiballZ_edpm_os_net_config.py _
Nov 23 15:34:42 np0005532762 ansible-async_wrapper.py[51802]: Starting module and watcher
Nov 23 15:34:42 np0005532762 ansible-async_wrapper.py[51802]: Start watching 51803 (300)
Nov 23 15:34:42 np0005532762 ansible-async_wrapper.py[51803]: Start module (51803)
Nov 23 15:34:42 np0005532762 ansible-async_wrapper.py[51799]: Return async_wrapper task started.
Nov 23 15:34:42 np0005532762 python3.9[51804]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 23 15:34:43 np0005532762 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 23 15:34:43 np0005532762 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 23 15:34:43 np0005532762 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 23 15:34:43 np0005532762 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 23 15:34:43 np0005532762 kernel: cfg80211: failed to load regulatory.db
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5098] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5117] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5649] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5651] audit: op="connection-add" uuid="c988ce38-27c2-4d3c-85ec-06b32df62858" name="br-ex-br" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5667] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5668] audit: op="connection-add" uuid="8cc83dc2-9293-41a4-b95c-e9edf11e20ca" name="br-ex-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5680] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5682] audit: op="connection-add" uuid="f5c2ae14-da24-43ef-9318-3cf2c229c249" name="eth1-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5695] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5697] audit: op="connection-add" uuid="bf5464d3-55b8-4fc0-8fcf-6b47d314fb69" name="vlan20-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5710] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5711] audit: op="connection-add" uuid="4965df62-2909-48eb-a3a9-d8bd87b0cd5f" name="vlan21-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5723] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5725] audit: op="connection-add" uuid="cc36bd65-7fc5-43b8-9ec5-4e4b17c40a8f" name="vlan22-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5737] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5739] audit: op="connection-add" uuid="8486841e-2ac8-4483-9797-ca325237ecfa" name="vlan23-port" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5760] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5777] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5779] audit: op="connection-add" uuid="a72cb7f2-1249-4a6e-ba90-aca60299fcae" name="br-ex-if" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5815] audit: op="connection-update" uuid="c8f28de1-00ce-5ad5-b1e7-36e35b879f57" name="ci-private-network" args="ipv6.addresses,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ovs-external-ids.data,connection.controller,connection.slave-type,connection.master,connection.port-type,connection.timestamp,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.routes,ovs-interface.type" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5834] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5836] audit: op="connection-add" uuid="fe8b5918-fff8-49b1-94f9-0962986fea5d" name="vlan20-if" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5853] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5855] audit: op="connection-add" uuid="89237f34-1baf-4498-87f3-65d56f6285d0" name="vlan21-if" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5872] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5874] audit: op="connection-add" uuid="680d22b5-afb2-440f-8eb3-4c40fd54294d" name="vlan22-if" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5891] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5892] audit: op="connection-add" uuid="2c19b141-69af-44dc-9ab3-ce6717d58c14" name="vlan23-if" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5905] audit: op="connection-delete" uuid="b8d72197-27ea-3e22-9d94-94c7806ccb0f" name="Wired connection 1" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5917] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5928] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5932] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c988ce38-27c2-4d3c-85ec-06b32df62858)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5932] audit: op="connection-activate" uuid="c988ce38-27c2-4d3c-85ec-06b32df62858" name="br-ex-br" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5934] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5941] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5945] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8cc83dc2-9293-41a4-b95c-e9edf11e20ca)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5947] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5953] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5957] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f5c2ae14-da24-43ef-9318-3cf2c229c249)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5959] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.5965] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6010] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (bf5464d3-55b8-4fc0-8fcf-6b47d314fb69)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6012] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6021] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6024] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (4965df62-2909-48eb-a3a9-d8bd87b0cd5f)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6026] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6032] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6035] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (cc36bd65-7fc5-43b8-9ec5-4e4b17c40a8f)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6037] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6043] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6048] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8486841e-2ac8-4483-9797-ca325237ecfa)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6049] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6051] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6052] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6059] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6063] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6067] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a72cb7f2-1249-4a6e-ba90-aca60299fcae)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6068] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6071] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6072] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6073] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6074] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6087] device (eth1): disconnecting for new activation request.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6087] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6090] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6092] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6093] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6095] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6099] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6104] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fe8b5918-fff8-49b1-94f9-0962986fea5d)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6105] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6107] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6109] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6110] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6112] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6116] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6119] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (89237f34-1baf-4498-87f3-65d56f6285d0)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6120] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6123] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6124] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6125] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6127] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6131] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6134] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (680d22b5-afb2-440f-8eb3-4c40fd54294d)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6135] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6137] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6139] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6140] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6143] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6146] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6150] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (2c19b141-69af-44dc-9ab3-ce6717d58c14)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6151] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6153] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6155] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6156] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6159] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6170] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6172] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6174] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6175] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6181] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6185] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6188] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6191] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6192] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6196] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6199] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6202] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6204] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6207] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6211] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6215] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6217] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6223] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6227] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6231] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6233] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6238] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): canceled DHCP transaction
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6242] dhcp4 (eth0): state changed no lease
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6243] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6263] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51805 uid=0 result="fail" reason="Device is not activated"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6296] device (eth1): disconnecting for new activation request.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6296] audit: op="connection-activate" uuid="c8f28de1-00ce-5ad5-b1e7-36e35b879f57" name="ci-private-network" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6304] dhcp4 (eth0): state changed new lease, address=38.102.83.106
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6747] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 kernel: ovs-system: entered promiscuous mode
Nov 23 15:34:44 np0005532762 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6758] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6765] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6773] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 kernel: Timeout policy base is empty
Nov 23 15:34:44 np0005532762 systemd-udevd[51809]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6790] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6792] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 15:34:44 np0005532762 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6863] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6938] device (eth1): Activation: starting connection 'ci-private-network' (c8f28de1-00ce-5ad5-b1e7-36e35b879f57)
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6952] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6955] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6961] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6963] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6963] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6965] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6967] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6969] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6970] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6981] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6992] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.6998] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7003] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7010] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7014] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7022] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7026] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7033] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7038] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7043] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7046] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7051] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7056] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7062] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7068] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7074] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7111] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7114] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7122] device (eth1): Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 kernel: br-ex: entered promiscuous mode
Nov 23 15:34:44 np0005532762 systemd-udevd[51811]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:34:44 np0005532762 kernel: vlan22: entered promiscuous mode
Nov 23 15:34:44 np0005532762 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7276] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7289] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7304] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7308] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7313] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 kernel: vlan23: entered promiscuous mode
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7366] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7376] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 kernel: vlan20: entered promiscuous mode
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7394] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7396] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7401] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7444] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532762 kernel: vlan21: entered promiscuous mode
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7458] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7487] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7488] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7490] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7497] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7510] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7543] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7544] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7550] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7568] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7579] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7610] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7611] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 23 15:34:44 np0005532762 NetworkManager[49021]: <info>  [1763930084.7616] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 23 15:34:45 np0005532762 NetworkManager[49021]: <info>  [1763930085.8800] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.0436] checkpoint[0x555c7ef66950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.0438] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 python3.9[52163]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=status _async_dir=/root/.ansible_async
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.3486] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.3499] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.5964] audit: op="networking-control" arg="global-dns-configuration" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.6005] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.6050] audit: op="networking-control" arg="global-dns-configuration" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.6079] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.7664] checkpoint[0x555c7ef66a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 23 15:34:46 np0005532762 NetworkManager[49021]: <info>  [1763930086.7671] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51805 uid=0 result="success"
Nov 23 15:34:46 np0005532762 ansible-async_wrapper.py[51803]: Module complete (51803)
Nov 23 15:34:47 np0005532762 ansible-async_wrapper.py[51802]: Done in kid B.
Nov 23 15:34:49 np0005532762 python3.9[52268]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=status _async_dir=/root/.ansible_async
Nov 23 15:34:50 np0005532762 python3.9[52368]: ansible-ansible.legacy.async_status Invoked with jid=j595449502514.51799 mode=cleanup _async_dir=/root/.ansible_async
Nov 23 15:34:51 np0005532762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 15:34:51 np0005532762 python3.9[52522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:52 np0005532762 python3.9[52645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930091.3771622-927-246569029256638/.source.returncode _original_basename=._9la4h4r follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:53 np0005532762 python3.9[52798]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:34:53 np0005532762 python3.9[52923]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930092.914343-975-134779940725131/.source.cfg _original_basename=.e015nhze follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:34:55 np0005532762 python3.9[53075]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:34:55 np0005532762 systemd[1]: Reloading Network Manager...
Nov 23 15:34:55 np0005532762 NetworkManager[49021]: <info>  [1763930095.3092] audit: op="reload" arg="0" pid=53079 uid=0 result="success"
Nov 23 15:34:55 np0005532762 NetworkManager[49021]: <info>  [1763930095.3100] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 23 15:34:55 np0005532762 systemd[1]: Reloaded Network Manager.
Nov 23 15:34:55 np0005532762 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Nov 23 15:34:55 np0005532762 systemd[1]: session-11.scope: Deactivated successfully.
Nov 23 15:34:55 np0005532762 systemd[1]: session-11.scope: Consumed 48.229s CPU time.
Nov 23 15:34:55 np0005532762 systemd-logind[793]: Removed session 11.
Nov 23 15:35:01 np0005532762 systemd-logind[793]: New session 12 of user zuul.
Nov 23 15:35:01 np0005532762 systemd[1]: Started Session 12 of User zuul.
Nov 23 15:35:02 np0005532762 python3.9[53263]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:03 np0005532762 python3.9[53418]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:05 np0005532762 python3.9[53611]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:05 np0005532762 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 15:35:05 np0005532762 systemd[1]: session-12.scope: Deactivated successfully.
Nov 23 15:35:05 np0005532762 systemd[1]: session-12.scope: Consumed 2.280s CPU time.
Nov 23 15:35:05 np0005532762 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Nov 23 15:35:05 np0005532762 systemd-logind[793]: Removed session 12.
Nov 23 15:35:11 np0005532762 systemd-logind[793]: New session 13 of user zuul.
Nov 23 15:35:11 np0005532762 systemd[1]: Started Session 13 of User zuul.
Nov 23 15:35:12 np0005532762 python3.9[53795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:13 np0005532762 python3.9[53950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:14 np0005532762 python3.9[54106]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:15 np0005532762 python3.9[54190]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:17 np0005532762 python3.9[54344]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:19 np0005532762 python3.9[54539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:19 np0005532762 python3.9[54691]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:19 np0005532762 systemd[1]: var-lib-containers-storage-overlay-compat2235841369-merged.mount: Deactivated successfully.
Nov 23 15:35:19 np0005532762 podman[54692]: 2025-11-23 20:35:19.997418788 +0000 UTC m=+0.096153540 system refresh
Nov 23 15:35:20 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:35:21 np0005532762 python3.9[54854]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:21 np0005532762 python3.9[54979]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930120.4056346-198-119984742398673/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ac1c67868e36c2960d9b69f46efe99c8dc349861 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:22 np0005532762 python3.9[55131]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:23 np0005532762 python3.9[55254]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930122.0771065-243-154514973180175/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:24 np0005532762 python3.9[55406]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:25 np0005532762 python3.9[55558]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:25 np0005532762 python3.9[55710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:26 np0005532762 python3.9[55864]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:35:27 np0005532762 python3.9[56016]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:30 np0005532762 python3.9[56169]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:35:31 np0005532762 python3.9[56323]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:35:32 np0005532762 python3.9[56475]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:35:32 np0005532762 python3.9[56627]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:35:34 np0005532762 python3.9[56780]: ansible-service_facts Invoked
Nov 23 15:35:34 np0005532762 network[56797]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:35:34 np0005532762 network[56798]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:35:34 np0005532762 network[56799]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:35:40 np0005532762 python3.9[57253]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:35:42 np0005532762 python3.9[57406]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 15:35:44 np0005532762 python3.9[57558]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:45 np0005532762 python3.9[57683]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930143.909261-675-74030862214967/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:46 np0005532762 python3.9[57837]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:35:46 np0005532762 python3.9[57962]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930145.5873191-721-83237459869671/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:48 np0005532762 python3.9[58116]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:35:50 np0005532762 python3.9[58270]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:51 np0005532762 python3.9[58354]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:35:53 np0005532762 python3.9[58508]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:35:54 np0005532762 python3.9[58592]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:35:54 np0005532762 chronyd[808]: chronyd exiting
Nov 23 15:35:54 np0005532762 systemd[1]: Stopping NTP client/server...
Nov 23 15:35:54 np0005532762 systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 15:35:54 np0005532762 systemd[1]: Stopped NTP client/server.
Nov 23 15:35:54 np0005532762 systemd[1]: Starting NTP client/server...
Nov 23 15:35:54 np0005532762 chronyd[58600]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 15:35:54 np0005532762 chronyd[58600]: Frequency -25.976 +/- 0.404 ppm read from /var/lib/chrony/drift
Nov 23 15:35:54 np0005532762 chronyd[58600]: Loaded seccomp filter (level 2)
Nov 23 15:35:54 np0005532762 systemd[1]: Started NTP client/server.
Nov 23 15:35:55 np0005532762 systemd[1]: session-13.scope: Deactivated successfully.
Nov 23 15:35:55 np0005532762 systemd[1]: session-13.scope: Consumed 24.782s CPU time.
Nov 23 15:35:55 np0005532762 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Nov 23 15:35:55 np0005532762 systemd-logind[793]: Removed session 13.
Nov 23 15:36:00 np0005532762 systemd-logind[793]: New session 14 of user zuul.
Nov 23 15:36:01 np0005532762 systemd[1]: Started Session 14 of User zuul.
Nov 23 15:36:01 np0005532762 python3.9[58781]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:02 np0005532762 python3.9[58933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:03 np0005532762 python3.9[59056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930162.269778-63-249838362816358/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:04 np0005532762 systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 15:36:04 np0005532762 systemd[1]: session-14.scope: Consumed 1.517s CPU time.
Nov 23 15:36:04 np0005532762 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Nov 23 15:36:04 np0005532762 systemd-logind[793]: Removed session 14.
Nov 23 15:36:09 np0005532762 systemd-logind[793]: New session 15 of user zuul.
Nov 23 15:36:09 np0005532762 systemd[1]: Started Session 15 of User zuul.
Nov 23 15:36:10 np0005532762 python3.9[59236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:36:11 np0005532762 python3.9[59392]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:12 np0005532762 python3.9[59567]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:13 np0005532762 python3.9[59690]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763930171.9860466-84-133190198896731/.source.json _original_basename=.ufs97lv7 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:14 np0005532762 python3.9[59842]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:14 np0005532762 python3.9[59965]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930174.028121-153-162254744154337/.source _original_basename=.a0_52iof follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:16 np0005532762 python3.9[60117]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:16 np0005532762 python3.9[60269]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:17 np0005532762 python3.9[60392]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930176.3679996-225-226355941565623/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:17 np0005532762 python3.9[60544]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:18 np0005532762 python3.9[60667]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763930177.4729695-225-263150874099607/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:36:19 np0005532762 python3.9[60819]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:20 np0005532762 python3.9[60971]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:21 np0005532762 python3.9[61094]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930180.1015997-336-137795132502481/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:22 np0005532762 python3.9[61246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:22 np0005532762 python3.9[61369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930181.6746585-381-236475053013401/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:24 np0005532762 python3.9[61521]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:24 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:24 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:24 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:24 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:24 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:24 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:24 np0005532762 systemd[1]: Starting EDPM Container Shutdown...
Nov 23 15:36:24 np0005532762 systemd[1]: Finished EDPM Container Shutdown.
Nov 23 15:36:25 np0005532762 python3.9[61748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:25 np0005532762 python3.9[61871]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930184.9895394-450-224800953782196/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:27 np0005532762 python3.9[62023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:27 np0005532762 python3.9[62146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930186.6092234-495-94860639556863/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:28 np0005532762 python3.9[62298]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:28 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:28 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:28 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:28 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:28 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:28 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:29 np0005532762 systemd[1]: Starting Create netns directory...
Nov 23 15:36:29 np0005532762 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:36:29 np0005532762 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:36:29 np0005532762 systemd[1]: Finished Create netns directory.
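The preset file content itself is not logged (`content=NOT_LOGGING_PARAMETER`), but a preset that keeps this unit enabled would be a one-line file in standard systemd.preset syntax. This is a hypothetical reconstruction, not the deployed file:

```ini
# /etc/systemd/system-preset/91-netns-placeholder.preset -- likely content, not logged
enable netns-placeholder.service
```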
Nov 23 15:36:30 np0005532762 python3.9[62527]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:36:30 np0005532762 network[62544]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:36:30 np0005532762 network[62545]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:36:30 np0005532762 network[62546]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:36:36 np0005532762 python3.9[62808]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:36 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:36 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:36 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:36 np0005532762 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 23 15:36:36 np0005532762 iptables.init[62850]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 23 15:36:36 np0005532762 iptables.init[62850]: iptables: Flushing firewall rules: [  OK  ]
Nov 23 15:36:36 np0005532762 systemd[1]: iptables.service: Deactivated successfully.
Nov 23 15:36:36 np0005532762 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 23 15:36:38 np0005532762 python3.9[63046]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:39 np0005532762 python3.9[63200]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:36:39 np0005532762 systemd[1]: Reloading.
Nov 23 15:36:39 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:36:39 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:36:39 np0005532762 systemd[1]: Starting Netfilter Tables...
Nov 23 15:36:39 np0005532762 systemd[1]: Finished Netfilter Tables.
Nov 23 15:36:40 np0005532762 python3.9[63392]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:36:41 np0005532762 python3.9[63545]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:42 np0005532762 python3.9[63671]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930201.156135-702-179355839405064/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:43 np0005532762 python3.9[63825]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:36:43 np0005532762 systemd[1]: Reloading OpenSSH server daemon...
Nov 23 15:36:43 np0005532762 systemd[1]: Reloaded OpenSSH server daemon.
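Note the `validate=/usr/sbin/sshd -T -f %s` argument in the copy above: the rendered file is parsed by sshd before it replaces `/etc/ssh/sshd_config`, so a broken config is never moved into place. Expressed as a task (task name and `src` are illustrative; `_original_basename=sshd_config_block.j2` suggests a template, and the validate command is from the log):

```yaml
- name: Deploy sshd_config                # illustrative name
  ansible.builtin.template:
    src: sshd_config_block.j2
    dest: /etc/ssh/sshd_config
    mode: "0600"
    validate: /usr/sbin/sshd -T -f %s     # %s is the candidate file; nonzero exit aborts the copy
```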
Nov 23 15:36:44 np0005532762 python3.9[63981]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:44 np0005532762 python3.9[64133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:45 np0005532762 python3.9[64256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930204.4876158-795-156621429705602/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:46 np0005532762 python3.9[64408]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:36:46 np0005532762 systemd[1]: Starting Time & Date Service...
Nov 23 15:36:46 np0005532762 systemd[1]: Started Time & Date Service.
Nov 23 15:36:47 np0005532762 python3.9[64564]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:48 np0005532762 python3.9[64716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:49 np0005532762 python3.9[64839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930208.308962-901-189564753156378/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:50 np0005532762 python3.9[64991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:50 np0005532762 python3.9[65114]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930209.883207-946-206658909949110/.source.yaml _original_basename=.tle0i_zh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:51 np0005532762 python3.9[65266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:52 np0005532762 python3.9[65389]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930211.3898401-990-162395279395576/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:53 np0005532762 python3.9[65541]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:36:54 np0005532762 python3.9[65696]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:36:55 np0005532762 python3[65849]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:36:56 np0005532762 python3.9[66001]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:56 np0005532762 python3.9[66124]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930215.6492622-1107-104058532410110/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:57 np0005532762 python3.9[66276]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:58 np0005532762 python3.9[66399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930217.1473029-1152-229716375945703/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:36:59 np0005532762 python3.9[66551]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:36:59 np0005532762 python3.9[66674]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930218.7269976-1197-74018833198880/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:00 np0005532762 python3.9[66826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:37:01 np0005532762 python3.9[66949]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930220.3910675-1242-86263106829677/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:02 np0005532762 python3.9[67101]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:37:03 np0005532762 python3.9[67224]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930222.0214162-1287-184737454912534/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:04 np0005532762 python3.9[67376]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:05 np0005532762 python3.9[67528]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:06 np0005532762 python3.9[67687]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
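The `#012` escapes in the `blockinfile` invocation are journald's encoding of embedded newlines; decoded, the managed block written to `/etc/sysconfig/nftables.conf` (and checked by `validate=nft -c -f %s`) reads:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```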
Nov 23 15:37:07 np0005532762 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:08 np0005532762 python3.9[67992]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:09 np0005532762 python3.9[68146]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:37:10 np0005532762 python3.9[68299]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
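With `state=mounted` and `boot=True`, `ansible.posix.mount` both mounts the filesystem and persists it. The logged parameters (`src=none`, `fstype=hugetlbfs`, `opts=pagesize=...`, `dump=0`, `passno=0`) translate to fstab entries along these lines:

```
none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0  0
none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0  0
```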
Nov 23 15:37:10 np0005532762 systemd[1]: session-15.scope: Deactivated successfully.
Nov 23 15:37:10 np0005532762 systemd[1]: session-15.scope: Consumed 32.976s CPU time.
Nov 23 15:37:10 np0005532762 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Nov 23 15:37:10 np0005532762 systemd-logind[793]: Removed session 15.
Nov 23 15:37:15 np0005532762 systemd-logind[793]: New session 16 of user zuul.
Nov 23 15:37:15 np0005532762 systemd[1]: Started Session 16 of User zuul.
Nov 23 15:37:16 np0005532762 python3.9[68480]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 15:37:17 np0005532762 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:37:17 np0005532762 python3.9[68634]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:18 np0005532762 python3.9[68786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:19 np0005532762 python3.9[68938]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=#012 create=True mode=0644 path=/tmp/ansible.pb_ans4v state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:37:20 np0005532762 python3.9[69090]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pb_ans4v' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:21 np0005532762 python3.9[69244]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pb_ans4v state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
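The three tasks above implement a render-then-swap pattern: create a tempfile, let `blockinfile` write the managed content into it, copy it over the destination, then delete the tempfile. A minimal sketch of the same sequence, replayed against scratch paths so it can run unprivileged (paths and key content are placeholders, not the logged values):

```shell
tmp=/tmp/ansible.demo                 # stand-in for the ansible.builtin.tempfile result
dest=/tmp/ssh_known_hosts.demo        # stand-in for /etc/ssh/ssh_known_hosts

# blockinfile surrounds the managed content with marker comments
{
  echo '# BEGIN ANSIBLE MANAGED BLOCK'
  echo 'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...'
  echo '# END ANSIBLE MANAGED BLOCK'
} > "$tmp"

cat "$tmp" > "$dest"                  # the logged shell task: cat tmpfile > known_hosts
rm -f "$tmp"                          # ansible.builtin.file state=absent
```

Writing through a tempfile keeps a half-rendered `/etc/ssh/ssh_known_hosts` from ever being visible to concurrent SSH clients.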
Nov 23 15:37:22 np0005532762 systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 15:37:22 np0005532762 systemd[1]: session-16.scope: Consumed 3.363s CPU time.
Nov 23 15:37:22 np0005532762 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Nov 23 15:37:22 np0005532762 systemd-logind[793]: Removed session 16.
Nov 23 15:37:27 np0005532762 systemd-logind[793]: New session 17 of user zuul.
Nov 23 15:37:27 np0005532762 systemd[1]: Started Session 17 of User zuul.
Nov 23 15:37:28 np0005532762 python3.9[69423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:30 np0005532762 python3.9[69579]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:37:31 np0005532762 python3.9[69733]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:37:32 np0005532762 python3.9[69888]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:33 np0005532762 python3.9[70041]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:34 np0005532762 python3.9[70195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:35 np0005532762 python3.9[70350]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
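The stat / apply / delete sequence above uses a marker file as an idempotency handshake: the rule pipeline is re-applied only while `edpm-rules.nft.changed` exists, and the marker is removed once the rules land. A sketch of that logic with a placeholder action, since the real play pipes the flush/rules/update-jumps files into `nft -f -` (which needs root and is not run here):

```shell
marker=/tmp/edpm-rules.nft.changed.demo   # stand-in for /etc/nftables/edpm-rules.nft.changed

touch "$marker"                           # the copy task touches this whenever rules change

if [ -e "$marker" ]; then
  # real play: cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -
  echo "reloading nftables rules"
  rm -f "$marker"                         # consumed, so an unchanged re-run is a no-op
fi
```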
Nov 23 15:37:35 np0005532762 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Nov 23 15:37:35 np0005532762 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 15:37:35 np0005532762 systemd[1]: session-17.scope: Consumed 4.186s CPU time.
Nov 23 15:37:35 np0005532762 systemd-logind[793]: Removed session 17.
Nov 23 15:37:41 np0005532762 systemd-logind[793]: New session 18 of user zuul.
Nov 23 15:37:41 np0005532762 systemd[1]: Started Session 18 of User zuul.
Nov 23 15:37:42 np0005532762 python3.9[70528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:37:43 np0005532762 python3.9[70684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:37:44 np0005532762 python3.9[70770]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:37:46 np0005532762 python3.9[70921]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:37:47 np0005532762 python3.9[71072]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:37:48 np0005532762 python3.9[71222]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:48 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:37:48 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:37:49 np0005532762 python3.9[71373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:37:49 np0005532762 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 15:37:49 np0005532762 systemd[1]: session-18.scope: Consumed 6.009s CPU time.
Nov 23 15:37:49 np0005532762 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Nov 23 15:37:49 np0005532762 systemd-logind[793]: Removed session 18.
Nov 23 15:37:58 np0005532762 systemd-logind[793]: New session 19 of user zuul.
Nov 23 15:37:58 np0005532762 systemd[1]: Started Session 19 of User zuul.
Nov 23 15:38:04 np0005532762 chronyd[58600]: Selected source 174.138.193.90 (pool.ntp.org)
Nov 23 15:38:04 np0005532762 python3[72141]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:38:06 np0005532762 python3[72236]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 15:38:08 np0005532762 python3[72263]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 15:38:08 np0005532762 python3[72289]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:38:08 np0005532762 kernel: loop: module loaded
Nov 23 15:38:08 np0005532762 kernel: loop3: detected capacity change from 0 to 41943040
Nov 23 15:38:09 np0005532762 python3[72324]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:38:09 np0005532762 lvm[72327]: PV /dev/loop3 not used.
Nov 23 15:38:09 np0005532762 lvm[72329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:38:09 np0005532762 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 23 15:38:09 np0005532762 lvm[72332]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 23 15:38:09 np0005532762 lvm[72339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:38:09 np0005532762 lvm[72339]: VG ceph_vg0 finished
Nov 23 15:38:09 np0005532762 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 23 15:38:10 np0005532762 python3[72417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 15:38:10 np0005532762 python3[72490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763930289.8710515-36961-72011168147632/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:38:11 np0005532762 python3[72544]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:38:11 np0005532762 systemd[1]: Reloading.
Nov 23 15:38:11 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:38:11 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:38:11 np0005532762 systemd[1]: Starting Ceph OSD losetup...
Nov 23 15:38:11 np0005532762 bash[72584]: /dev/loop3: [64513]:4328000 (/var/lib/ceph-osd-0.img)
Nov 23 15:38:11 np0005532762 systemd[1]: Finished Ceph OSD losetup.
Nov 23 15:38:11 np0005532762 lvm[72585]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:38:11 np0005532762 lvm[72585]: VG ceph_vg0 finished
Nov 23 15:38:14 np0005532762 python3[72609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:39:42 np0005532762 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 15:39:42 np0005532762 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 15:39:42 np0005532762 systemd-logind[793]: New session 20 of user ceph-admin.
Nov 23 15:39:42 np0005532762 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 15:39:43 np0005532762 systemd[1]: Starting User Manager for UID 42477...
Nov 23 15:39:43 np0005532762 systemd[72671]: Queued start job for default target Main User Target.
Nov 23 15:39:43 np0005532762 systemd[72671]: Created slice User Application Slice.
Nov 23 15:39:43 np0005532762 systemd[72671]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:39:43 np0005532762 systemd[72671]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:39:43 np0005532762 systemd[72671]: Reached target Paths.
Nov 23 15:39:43 np0005532762 systemd[72671]: Reached target Timers.
Nov 23 15:39:43 np0005532762 systemd[72671]: Starting D-Bus User Message Bus Socket...
Nov 23 15:39:43 np0005532762 systemd[72671]: Starting Create User's Volatile Files and Directories...
Nov 23 15:39:43 np0005532762 systemd[72671]: Finished Create User's Volatile Files and Directories.
Nov 23 15:39:43 np0005532762 systemd[72671]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:39:43 np0005532762 systemd[72671]: Reached target Sockets.
Nov 23 15:39:43 np0005532762 systemd[72671]: Reached target Basic System.
Nov 23 15:39:43 np0005532762 systemd[72671]: Reached target Main User Target.
Nov 23 15:39:43 np0005532762 systemd[72671]: Startup finished in 121ms.
Nov 23 15:39:43 np0005532762 systemd[1]: Started User Manager for UID 42477.
Nov 23 15:39:43 np0005532762 systemd[1]: Started Session 20 of User ceph-admin.
Nov 23 15:39:43 np0005532762 systemd-logind[793]: New session 22 of user ceph-admin.
Nov 23 15:39:43 np0005532762 systemd[1]: Started Session 22 of User ceph-admin.
Nov 23 15:39:43 np0005532762 systemd-logind[793]: New session 23 of user ceph-admin.
Nov 23 15:39:43 np0005532762 systemd[1]: Started Session 23 of User ceph-admin.
Nov 23 15:39:43 np0005532762 systemd-logind[793]: New session 24 of user ceph-admin.
Nov 23 15:39:43 np0005532762 systemd[1]: Started Session 24 of User ceph-admin.
Nov 23 15:39:44 np0005532762 systemd-logind[793]: New session 25 of user ceph-admin.
Nov 23 15:39:44 np0005532762 systemd[1]: Started Session 25 of User ceph-admin.
Nov 23 15:39:44 np0005532762 systemd-logind[793]: New session 26 of user ceph-admin.
Nov 23 15:39:44 np0005532762 systemd[1]: Started Session 26 of User ceph-admin.
Nov 23 15:39:44 np0005532762 systemd-logind[793]: New session 27 of user ceph-admin.
Nov 23 15:39:44 np0005532762 systemd[1]: Started Session 27 of User ceph-admin.
Nov 23 15:39:45 np0005532762 systemd-logind[793]: New session 28 of user ceph-admin.
Nov 23 15:39:45 np0005532762 systemd[1]: Started Session 28 of User ceph-admin.
Nov 23 15:39:45 np0005532762 systemd-logind[793]: New session 29 of user ceph-admin.
Nov 23 15:39:45 np0005532762 systemd[1]: Started Session 29 of User ceph-admin.
Nov 23 15:39:45 np0005532762 systemd-logind[793]: New session 30 of user ceph-admin.
Nov 23 15:39:45 np0005532762 systemd[1]: Started Session 30 of User ceph-admin.
Nov 23 15:39:46 np0005532762 systemd-logind[793]: New session 31 of user ceph-admin.
Nov 23 15:39:47 np0005532762 systemd[1]: Started Session 31 of User ceph-admin.
Nov 23 15:39:47 np0005532762 systemd-logind[793]: New session 32 of user ceph-admin.
Nov 23 15:39:47 np0005532762 systemd[1]: Started Session 32 of User ceph-admin.
Nov 23 15:39:47 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:48 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:48 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:48 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:49 np0005532762 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73245 (sysctl)
Nov 23 15:39:49 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:49 np0005532762 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 23 15:39:49 np0005532762 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 23 15:39:50 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:50 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:51 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:39:53 np0005532762 systemd[1]: var-lib-containers-storage-overlay-compat3555228885-lower\x2dmapped.mount: Deactivated successfully.
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.472716224 +0000 UTC m=+20.399341405 container create e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 23 15:40:11 np0005532762 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 23 15:40:11 np0005532762 systemd[1]: Started libpod-conmon-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope.
Nov 23 15:40:11 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.457017035 +0000 UTC m=+20.383642236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.572737384 +0000 UTC m=+20.499362585 container init e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.579764202 +0000 UTC m=+20.506389383 container start e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:11 np0005532762 practical_hodgkin[73488]: 167 167
Nov 23 15:40:11 np0005532762 systemd[1]: libpod-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope: Deactivated successfully.
Nov 23 15:40:11 np0005532762 conmon[73488]: conmon e9bd2a1d5f06d9cc774a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope/container/memory.events
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.58682564 +0000 UTC m=+20.513450821 container attach e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.587673843 +0000 UTC m=+20.514299024 container died e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:11 np0005532762 systemd[1]: var-lib-containers-storage-overlay-d85d9378ef6de38413897711c5ab6ee1a8b2ad895100c4fa60964ce7b1213882-merged.mount: Deactivated successfully.
Nov 23 15:40:11 np0005532762 podman[73422]: 2025-11-23 20:40:11.637594285 +0000 UTC m=+20.564219466 container remove e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:11 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:11 np0005532762 systemd[1]: libpod-conmon-e9bd2a1d5f06d9cc774a342e492c60e75f4c52d1d15afcd869cf762a22a8ad60.scope: Deactivated successfully.
Nov 23 15:40:11 np0005532762 podman[73513]: 2025-11-23 20:40:11.764333368 +0000 UTC m=+0.020077328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:11 np0005532762 podman[73513]: 2025-11-23 20:40:11.982734397 +0000 UTC m=+0.238478327 container create e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:14 np0005532762 systemd[1]: Started libpod-conmon-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope.
Nov 23 15:40:14 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:14 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:14 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:14 np0005532762 podman[73513]: 2025-11-23 20:40:14.2599657 +0000 UTC m=+2.515709650 container init e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 23 15:40:14 np0005532762 podman[73513]: 2025-11-23 20:40:14.265462608 +0000 UTC m=+2.521206538 container start e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:14 np0005532762 podman[73513]: 2025-11-23 20:40:14.274854979 +0000 UTC m=+2.530598919 container attach e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 23 15:40:14 np0005532762 friendly_colden[73532]: [
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:    {
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "available": false,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "being_replaced": false,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "ceph_device_lvm": false,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "lsm_data": {},
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "lvs": [],
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "path": "/dev/sr0",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "rejected_reasons": [
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "Has a FileSystem",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "Insufficient space (<5GB)"
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        ],
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        "sys_api": {
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "actuators": null,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "device_nodes": [
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:                "sr0"
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            ],
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "devname": "sr0",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "human_readable_size": "482.00 KB",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "id_bus": "ata",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "model": "QEMU DVD-ROM",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "nr_requests": "2",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "parent": "/dev/sr0",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "partitions": {},
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "path": "/dev/sr0",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "removable": "1",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "rev": "2.5+",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "ro": "0",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "rotational": "1",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "sas_address": "",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "sas_device_handle": "",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "scheduler_mode": "mq-deadline",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "sectors": 0,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "sectorsize": "2048",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "size": 493568.0,
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "support_discard": "2048",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "type": "disk",
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:            "vendor": "QEMU"
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:        }
Nov 23 15:40:14 np0005532762 friendly_colden[73532]:    }
Nov 23 15:40:14 np0005532762 friendly_colden[73532]: ]
Nov 23 15:40:14 np0005532762 systemd[1]: libpod-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope: Deactivated successfully.
Nov 23 15:40:14 np0005532762 podman[73513]: 2025-11-23 20:40:14.945974048 +0000 UTC m=+3.201718048 container died e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:15 np0005532762 systemd[1]: var-lib-containers-storage-overlay-f786460431bb502e9d36243a61847bf05f213d39ea7ca231331cbd9bafda844a-merged.mount: Deactivated successfully.
Nov 23 15:40:15 np0005532762 podman[73513]: 2025-11-23 20:40:15.317852513 +0000 UTC m=+3.573596443 container remove e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_colden, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:15 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:15 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:15 np0005532762 systemd[1]: libpod-conmon-e85974135ca9020bf55c9e58d5888f46504a4f0cfc6bf69b23368faf886bd81b.scope: Deactivated successfully.
Nov 23 15:40:18 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:18 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.192076019 +0000 UTC m=+0.042045181 container create 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:18 np0005532762 systemd[1]: Started libpod-conmon-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope.
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.172698272 +0000 UTC m=+0.022667474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:18 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.414630589 +0000 UTC m=+0.264599761 container init 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.4226567 +0000 UTC m=+0.272625862 container start 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.427643823 +0000 UTC m=+0.277613005 container attach 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:40:18 np0005532762 competent_solomon[75416]: 167 167
Nov 23 15:40:18 np0005532762 systemd[1]: libpod-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope: Deactivated successfully.
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.429838206 +0000 UTC m=+0.279807368 container died 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:18 np0005532762 podman[75399]: 2025-11-23 20:40:18.571331375 +0000 UTC m=+0.421300537 container remove 835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_solomon, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:18 np0005532762 systemd[1]: libpod-conmon-835d471630931944b7f9bab88b5da2fbefc6a2de406072d30b4de3539d648391.scope: Deactivated successfully.
Nov 23 15:40:18 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:18 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:18 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:18 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:19 np0005532762 systemd[1]: Reached target All Ceph clusters and services.
Nov 23 15:40:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:19 np0005532762 systemd[1]: Reached target Ceph cluster 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:40:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:19 np0005532762 systemd[1]: Created slice Slice /system/ceph-03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:40:19 np0005532762 systemd[1]: Reached target System Time Set.
Nov 23 15:40:19 np0005532762 systemd[1]: Reached target System Time Synchronized.
Nov 23 15:40:19 np0005532762 systemd[1]: Starting Ceph crash.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:40:20 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:20 np0005532762 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 15:40:20 np0005532762 podman[75671]: 2025-11-23 20:40:20.214765697 +0000 UTC m=+0.036587043 container create e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Nov 23 15:40:20 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:20 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:20 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b95c3c65c5acefb31fd3da1240eb22d79c78a1838c3103e9fbae9f77b2cc08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:20 np0005532762 podman[75671]: 2025-11-23 20:40:20.26494338 +0000 UTC m=+0.086764746 container init e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:40:20 np0005532762 podman[75671]: 2025-11-23 20:40:20.269919903 +0000 UTC m=+0.091741249 container start e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 15:40:20 np0005532762 bash[75671]: e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008
Nov 23 15:40:20 np0005532762 podman[75671]: 2025-11-23 20:40:20.196070519 +0000 UTC m=+0.017891885 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:20 np0005532762 systemd[1]: Started Ceph crash.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.407+0000 7fcd249a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.407+0000 7fcd249a7640 -1 AuthRegistry(0x7fcd1c0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.408+0000 7fcd249a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.408+0000 7fcd249a7640 -1 AuthRegistry(0x7fcd249a5ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.409+0000 7fcd2271c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: 2025-11-23T20:40:20.409+0000 7fcd249a7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 23 15:40:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1[75686]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.842229841 +0000 UTC m=+0.038139488 container create 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:20 np0005532762 systemd[1]: Started libpod-conmon-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope.
Nov 23 15:40:20 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.825487289 +0000 UTC m=+0.021396966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.926822964 +0000 UTC m=+0.122732641 container init 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.935568765 +0000 UTC m=+0.131478402 container start 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.939653953 +0000 UTC m=+0.135563650 container attach 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:20 np0005532762 quizzical_brattain[75808]: 167 167
Nov 23 15:40:20 np0005532762 systemd[1]: libpod-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope: Deactivated successfully.
Nov 23 15:40:20 np0005532762 podman[75792]: 2025-11-23 20:40:20.942716071 +0000 UTC m=+0.138625728 container died 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:21 np0005532762 systemd[1]: var-lib-containers-storage-overlay-841acbcb219eb790486c9d18f23e56dc89726b7081af75c6be34bac8e39c9a68-merged.mount: Deactivated successfully.
Nov 23 15:40:21 np0005532762 podman[75792]: 2025-11-23 20:40:21.081838341 +0000 UTC m=+0.277747998 container remove 8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:40:21 np0005532762 systemd[1]: libpod-conmon-8f5eac7277f3d9284b82f4f7f675e851f9544acf9047f4672bcfb3bbfa186100.scope: Deactivated successfully.
Nov 23 15:40:21 np0005532762 podman[75831]: 2025-11-23 20:40:21.268051926 +0000 UTC m=+0.047682452 container create dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 15:40:21 np0005532762 systemd[1]: Started libpod-conmon-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope.
Nov 23 15:40:21 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:21 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532762 podman[75831]: 2025-11-23 20:40:21.242689577 +0000 UTC m=+0.022320123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:21 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:21 np0005532762 podman[75831]: 2025-11-23 20:40:21.347054349 +0000 UTC m=+0.126684915 container init dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:40:21 np0005532762 podman[75831]: 2025-11-23 20:40:21.358497148 +0000 UTC m=+0.138127664 container start dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:21 np0005532762 podman[75831]: 2025-11-23 20:40:21.361992499 +0000 UTC m=+0.141623045 container attach dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:21 np0005532762 affectionate_mahavira[75847]: --> passed data devices: 0 physical, 1 LVM
Nov 23 15:40:21 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:21 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:21 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9775703-f092-47d3-b1e4-23e694631322
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 23 15:40:22 np0005532762 lvm[75908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:22 np0005532762 lvm[75908]: VG ceph_vg0 finished
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: stderr: got monmap epoch 1
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: --> Creating keyring file for osd.0
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 23 15:40:22 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f9775703-f092-47d3-b1e4-23e694631322 --setuser ceph --setgroup ceph
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: stderr: 2025-11-23T20:40:22.964+0000 7ff9bc9eb740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: stderr: 2025-11-23T20:40:23.233+0000 7ff9bc9eb740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 15:40:26 np0005532762 affectionate_mahavira[75847]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 23 15:40:26 np0005532762 systemd[1]: libpod-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Deactivated successfully.
Nov 23 15:40:26 np0005532762 systemd[1]: libpod-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Consumed 1.937s CPU time.
Nov 23 15:40:26 np0005532762 podman[76832]: 2025-11-23 20:40:26.943134139 +0000 UTC m=+0.024596929 container died dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:26 np0005532762 systemd[1]: var-lib-containers-storage-overlay-7d19f858e30e97909a553f27a18d198ce7376239986c3a5a8495e2fe107bbde5-merged.mount: Deactivated successfully.
Nov 23 15:40:27 np0005532762 podman[76832]: 2025-11-23 20:40:27.010799505 +0000 UTC m=+0.092262285 container remove dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 23 15:40:27 np0005532762 systemd[1]: libpod-conmon-dba01039bbca5f83b306f3e6f65f6688b3441a2056a3bd0842f7b4ebce97b2a4.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.586387797 +0000 UTC m=+0.057963247 container create d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:40:27 np0005532762 systemd[1]: Started libpod-conmon-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope.
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.557193627 +0000 UTC m=+0.028769157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:27 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.676149418 +0000 UTC m=+0.147724868 container init d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.684063836 +0000 UTC m=+0.155639286 container start d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 23 15:40:27 np0005532762 infallible_burnell[76954]: 167 167
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.689698828 +0000 UTC m=+0.161274308 container attach d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:27 np0005532762 systemd[1]: libpod-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.69077637 +0000 UTC m=+0.162351820 container died d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:27 np0005532762 systemd[1]: var-lib-containers-storage-overlay-a7d6d9e608b5ab2711a2ea652747770a7202c622129a5594dd6d701780735ff1-merged.mount: Deactivated successfully.
Nov 23 15:40:27 np0005532762 podman[76938]: 2025-11-23 20:40:27.731077818 +0000 UTC m=+0.202653268 container remove d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Nov 23 15:40:27 np0005532762 systemd[1]: libpod-conmon-d38c0eb54a98b27d67e04df885cdcf78034dd0999650a4c0288e78d85bfa3f05.scope: Deactivated successfully.
Nov 23 15:40:27 np0005532762 podman[76977]: 2025-11-23 20:40:27.885635913 +0000 UTC m=+0.048939298 container create 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 23 15:40:27 np0005532762 systemd[1]: Started libpod-conmon-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope.
Nov 23 15:40:27 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:27 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:27 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:27 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:27 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:27 np0005532762 podman[76977]: 2025-11-23 20:40:27.861916491 +0000 UTC m=+0.025219926 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:27 np0005532762 podman[76977]: 2025-11-23 20:40:27.962527695 +0000 UTC m=+0.125831100 container init 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:27 np0005532762 podman[76977]: 2025-11-23 20:40:27.969826594 +0000 UTC m=+0.133129999 container start 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:40:27 np0005532762 podman[76977]: 2025-11-23 20:40:27.974322603 +0000 UTC m=+0.137625988 container attach 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]: {
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:    "0": [
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:        {
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "devices": [
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "/dev/loop3"
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            ],
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "lv_name": "ceph_lv0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "lv_size": "21470642176",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=03808be8-ae4a-5548-82e6-4a294f1bc627,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f9775703-f092-47d3-b1e4-23e694631322,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "lv_uuid": "ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "name": "ceph_lv0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "tags": {
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.block_uuid": "ioaAYR-gFzA-A11a-ddiv-8k6F-N5qc-RuEB9j",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.cephx_lockbox_secret": "",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.cluster_fsid": "03808be8-ae4a-5548-82e6-4a294f1bc627",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.cluster_name": "ceph",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.crush_device_class": "",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.encrypted": "0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.osd_fsid": "f9775703-f092-47d3-b1e4-23e694631322",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.osd_id": "0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.type": "block",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.vdo": "0",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:                "ceph.with_tpm": "0"
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            },
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "type": "block",
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:            "vg_name": "ceph_vg0"
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:        }
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]:    ]
Nov 23 15:40:28 np0005532762 tender_aryabhata[76993]: }
Nov 23 15:40:28 np0005532762 systemd[1]: libpod-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope: Deactivated successfully.
Nov 23 15:40:28 np0005532762 podman[76977]: 2025-11-23 20:40:28.249178997 +0000 UTC m=+0.412482382 container died 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 23 15:40:28 np0005532762 systemd[1]: var-lib-containers-storage-overlay-c7321642969a322c229c3dd76362bb360881a65d749dcf88d0648864130c0ac1-merged.mount: Deactivated successfully.
Nov 23 15:40:28 np0005532762 podman[76977]: 2025-11-23 20:40:28.316552786 +0000 UTC m=+0.479856171 container remove 447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_aryabhata, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:28 np0005532762 systemd[1]: libpod-conmon-447f99d2af739880bbbdced1fce3ac57387decd2167435e8748dd6ad4027344b.scope: Deactivated successfully.
Nov 23 15:40:28 np0005532762 podman[77104]: 2025-11-23 20:40:28.86299049 +0000 UTC m=+0.070179089 container create 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:28 np0005532762 podman[77104]: 2025-11-23 20:40:28.81396071 +0000 UTC m=+0.021149359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:28 np0005532762 systemd[1]: Started libpod-conmon-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope.
Nov 23 15:40:29 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:29 np0005532762 podman[77104]: 2025-11-23 20:40:29.041711059 +0000 UTC m=+0.248899778 container init 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:29 np0005532762 podman[77104]: 2025-11-23 20:40:29.049407241 +0000 UTC m=+0.256595840 container start 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:29 np0005532762 infallible_agnesi[77121]: 167 167
Nov 23 15:40:29 np0005532762 systemd[1]: libpod-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532762 podman[77104]: 2025-11-23 20:40:29.066380889 +0000 UTC m=+0.273569538 container attach 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:29 np0005532762 podman[77104]: 2025-11-23 20:40:29.066841912 +0000 UTC m=+0.274030551 container died 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:29 np0005532762 systemd[1]: var-lib-containers-storage-overlay-fa11f2b0bb9856b962f226fa9a2c8966defd92bee00dc499e59a86acbf64a777-merged.mount: Deactivated successfully.
Nov 23 15:40:29 np0005532762 podman[77104]: 2025-11-23 20:40:29.141757866 +0000 UTC m=+0.348946485 container remove 4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:40:29 np0005532762 systemd[1]: libpod-conmon-4c8a0dfcac12190cd012579bb8129bb8ecdf3cd449ac4c589b4cc40fb9e8c553.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.372082669 +0000 UTC m=+0.049341699 container create d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:29 np0005532762 systemd[1]: Started libpod-conmon-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope.
Nov 23 15:40:29 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.343244121 +0000 UTC m=+0.020503161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.450411242 +0000 UTC m=+0.127670282 container init d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.458214487 +0000 UTC m=+0.135473497 container start d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.473025593 +0000 UTC m=+0.150284603 container attach d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 23 15:40:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]:                            [--no-systemd] [--no-tmpfs]
Nov 23 15:40:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test[77171]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 23 15:40:29 np0005532762 systemd[1]: libpod-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope: Deactivated successfully.
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.641612661 +0000 UTC m=+0.318871671 container died d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:40:29 np0005532762 systemd[1]: var-lib-containers-storage-overlay-bc16f4d2876c96cee2e6a1149bfa4567b2ca6ac854e3cb6c5798e0539b9192bd-merged.mount: Deactivated successfully.
Nov 23 15:40:29 np0005532762 podman[77154]: 2025-11-23 20:40:29.828505895 +0000 UTC m=+0.505764905 container remove d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:40:29 np0005532762 systemd[1]: libpod-conmon-d83643ea7da0fdebc8408e4c615efdeb873abb8403827507f96c914fdbe69b75.scope: Deactivated successfully.
Nov 23 15:40:30 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:30 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:30 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:30 np0005532762 systemd[1]: Reloading.
Nov 23 15:40:30 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:40:30 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:40:30 np0005532762 systemd[1]: Starting Ceph osd.0 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:40:30 np0005532762 podman[77331]: 2025-11-23 20:40:30.751758297 +0000 UTC m=+0.054758196 container create 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:30 np0005532762 podman[77331]: 2025-11-23 20:40:30.71639466 +0000 UTC m=+0.019394589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:31 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:31 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:31 np0005532762 podman[77331]: 2025-11-23 20:40:31.063169962 +0000 UTC m=+0.366169871 container init 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:40:31 np0005532762 podman[77331]: 2025-11-23 20:40:31.068561307 +0000 UTC m=+0.371561206 container start 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:31 np0005532762 podman[77331]: 2025-11-23 20:40:31.072054078 +0000 UTC m=+0.375053977 container attach 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 lvm[77427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:31 np0005532762 lvm[77427]: VG ceph_vg0 finished
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 15:40:31 np0005532762 bash[77331]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 15:40:31 np0005532762 bash[77331]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 23 15:40:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 bash[77331]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 bash[77331]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:32 np0005532762 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 15:40:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:32 np0005532762 bash[77331]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 23 15:40:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate[77346]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 15:40:32 np0005532762 bash[77331]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 23 15:40:32 np0005532762 systemd[1]: libpod-518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5.scope: Deactivated successfully.
Nov 23 15:40:32 np0005532762 systemd[1]: libpod-518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5.scope: Consumed 1.230s CPU time.
Nov 23 15:40:32 np0005532762 podman[77538]: 2025-11-23 20:40:32.307855906 +0000 UTC m=+0.023508877 container died 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:40:32 np0005532762 systemd[1]: var-lib-containers-storage-overlay-4676136bc24d41b20a469446a6c0b1fe08819aa93e98fd8876dadc8ae19fdd75-merged.mount: Deactivated successfully.
Nov 23 15:40:32 np0005532762 podman[77538]: 2025-11-23 20:40:32.346208809 +0000 UTC m=+0.061861780 container remove 518994d162ec21eecce77a9361b1e108ac4593a01aa5840d8a3f905d288097b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 23 15:40:32 np0005532762 podman[77596]: 2025-11-23 20:40:32.52357612 +0000 UTC m=+0.042375630 container create 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 23 15:40:32 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf960282a8df093e930d2c4ef14eda295e8c6a582da998673935a8c565d60ad/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:32 np0005532762 podman[77596]: 2025-11-23 20:40:32.594272313 +0000 UTC m=+0.113071853 container init 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:32 np0005532762 podman[77596]: 2025-11-23 20:40:32.501603348 +0000 UTC m=+0.020402878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:32 np0005532762 podman[77596]: 2025-11-23 20:40:32.602206061 +0000 UTC m=+0.121005571 container start 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:32 np0005532762 bash[77596]: 0867d5176dde679eedb1159c9fe9b63acffcda07d2fe1b063e26d75a6a0e4fdc
Nov 23 15:40:32 np0005532762 systemd[1]: Started Ceph osd.0 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: pidfile_write: ignore empty --pid-file
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:32 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.207810627 +0000 UTC m=+0.049855575 container create d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:40:33 np0005532762 systemd[1]: Started libpod-conmon-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope.
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:33 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.18671289 +0000 UTC m=+0.028757868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.290689451 +0000 UTC m=+0.132734429 container init d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.296938431 +0000 UTC m=+0.138983369 container start d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Nov 23 15:40:33 np0005532762 elated_diffie[77744]: 167 167
Nov 23 15:40:33 np0005532762 systemd[1]: libpod-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope: Deactivated successfully.
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.345165037 +0000 UTC m=+0.187209985 container attach d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.346679031 +0000 UTC m=+0.188723999 container died d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:40:33 np0005532762 systemd[1]: var-lib-containers-storage-overlay-443077b0fa3c2450d97ed42fa89cf2cc681443c83056b5526a19a0d6a3778b37-merged.mount: Deactivated successfully.
Nov 23 15:40:33 np0005532762 podman[77727]: 2025-11-23 20:40:33.491414082 +0000 UTC m=+0.333459030 container remove d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_diffie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:40:33 np0005532762 systemd[1]: libpod-conmon-d0dbd55ce580a48eabdfa516c2e0422cc9f1e2504766341b21245d21fa057517.scope: Deactivated successfully.
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558058fe1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:33 np0005532762 podman[77772]: 2025-11-23 20:40:33.635182128 +0000 UTC m=+0.037758137 container create 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:40:33 np0005532762 systemd[1]: Started libpod-conmon-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope.
Nov 23 15:40:33 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:33 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:33 np0005532762 podman[77772]: 2025-11-23 20:40:33.705956923 +0000 UTC m=+0.108532962 container init 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:33 np0005532762 podman[77772]: 2025-11-23 20:40:33.617579102 +0000 UTC m=+0.020155131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:33 np0005532762 podman[77772]: 2025-11-23 20:40:33.713641074 +0000 UTC m=+0.116217083 container start 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Nov 23 15:40:33 np0005532762 podman[77772]: 2025-11-23 20:40:33.71768985 +0000 UTC m=+0.120265869 container attach 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: load: jerasure load: lrc 
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:33 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 lvm[77870]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:40:34 np0005532762 lvm[77870]: VG ceph_vg0 finished
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 eager_rhodes[77788]: {}
Nov 23 15:40:34 np0005532762 systemd[1]: libpod-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Deactivated successfully.
Nov 23 15:40:34 np0005532762 podman[77772]: 2025-11-23 20:40:34.402886775 +0000 UTC m=+0.805462784 container died 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:40:34 np0005532762 systemd[1]: libpod-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Consumed 1.020s CPU time.
Nov 23 15:40:34 np0005532762 systemd[1]: var-lib-containers-storage-overlay-03db5adf52abf5a5af50c6a1537fea671d64b17e49c505d3dce7e4fa318500bf-merged.mount: Deactivated successfully.
Nov 23 15:40:34 np0005532762 podman[77772]: 2025-11-23 20:40:34.449233569 +0000 UTC m=+0.851809578 container remove 0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_rhodes, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 23 15:40:34 np0005532762 systemd[1]: libpod-conmon-0460ab23e3c04091a55df3aae3bff6d20ceec6a058103951e824f32ee63c5e45.scope: Deactivated successfully.
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount shared_bdev_used = 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Git sha 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DB SUMMARY
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DB Session ID:  T5EFLYR04KJJ2CJAS6UC
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                     Options.env: 0x558059e4ddc0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                Options.info_log: 0x558059e517a0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.write_buffer_manager: 0x558059f46a00
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.row_cache: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.wal_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.wal_compression: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_background_jobs: 4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Compression algorithms supported:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZSTD supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kXpressCompression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kBZip2Compression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kLZ4Compression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZlibCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kSnappyCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: adc772d2-7d85-4926-b23a-f9f15aa731bb
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434707770, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434707968, "job": 1, "event": "recovery_finished"}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: freelist init
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: freelist _read_cfg
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs umount
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) close
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bdev(0x558059e7d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluefs mount shared_bdev_used = 4718592
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Git sha 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DB SUMMARY
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DB Session ID:  T5EFLYR04KJJ2CJAS6UD
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                     Options.env: 0x558059fea310
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                Options.info_log: 0x558059e51920
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.write_buffer_manager: 0x558059f46a00
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.row_cache: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                              Options.wal_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.wal_compression: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_background_jobs: 4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Compression algorithms supported:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZSTD supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kXpressCompression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kBZip2Compression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kLZ4Compression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kZlibCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: #011kSnappyCompression supported: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558059077350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558059077350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:           Options.merge_operator: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558059e51ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5580590769b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.compression: LZ4
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.num_levels: 7
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: adc772d2-7d85-4926-b23a-f9f15aa731bb
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434972448, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434979718, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434983654, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434986577, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "adc772d2-7d85-4926-b23a-f9f15aa731bb", "db_session_id": "T5EFLYR04KJJ2CJAS6UD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930434988962, "job": 1, "event": "recovery_finished"}
Nov 23 15:40:34 np0005532762 ceph-osd[77613]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55805a04e000
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: rocksdb: DB pointer 0x558059ff8000
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: _get_class not permitted to load lua
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: _get_class not permitted to load sdk
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 load_pgs
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 load_pgs opened 0 pgs
Nov 23 15:40:35 np0005532762 ceph-osd[77613]: osd.0 0 log_to_monitors true
Nov 23 15:40:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:35.019+0000 7fc2071c1740 -1 osd.0 0 log_to_monitors true
Nov 23 15:40:36 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 23 15:40:36 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 23 15:40:36 np0005532762 podman[78448]: 2025-11-23 20:40:36.764449598 +0000 UTC m=+0.966926487 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 15:40:36 np0005532762 podman[78448]: 2025-11-23 20:40:36.865471723 +0000 UTC m=+1.067948612 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 done with init, starting boot process
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 start_boot
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 23 15:40:37 np0005532762 ceph-osd[77613]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.360040143 +0000 UTC m=+0.098534965 container create 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.282187153 +0000 UTC m=+0.020681995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:39 np0005532762 systemd[1]: Started libpod-conmon-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope.
Nov 23 15:40:39 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.563020309 +0000 UTC m=+0.301515151 container init 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.568755104 +0000 UTC m=+0.307249926 container start 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 15:40:39 np0005532762 zen_pascal[78608]: 167 167
Nov 23 15:40:39 np0005532762 systemd[1]: libpod-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope: Deactivated successfully.
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.62358401 +0000 UTC m=+0.362078882 container attach 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:39 np0005532762 podman[78592]: 2025-11-23 20:40:39.624574169 +0000 UTC m=+0.363068991 container died 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:39 np0005532762 systemd[1]: var-lib-containers-storage-overlay-fad6957373f9b8d0a72e9360c8724a7ef38145da62dd8b4364e3cbe5256ba5f7-merged.mount: Deactivated successfully.
Nov 23 15:40:40 np0005532762 podman[78592]: 2025-11-23 20:40:40.066107207 +0000 UTC m=+0.804602069 container remove 4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 15:40:40 np0005532762 systemd[1]: libpod-conmon-4520fe93de4cdb1b889e4a4f68af2e68cd2926571cbba134cc770be56f5c9810.scope: Deactivated successfully.
Nov 23 15:40:40 np0005532762 podman[78633]: 2025-11-23 20:40:40.245599188 +0000 UTC m=+0.078177268 container create a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:40:40 np0005532762 podman[78633]: 2025-11-23 20:40:40.189098483 +0000 UTC m=+0.021676583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:40:40 np0005532762 systemd[1]: Started libpod-conmon-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope.
Nov 23 15:40:40 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:40:40 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:40 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:40 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:40 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:40:40 np0005532762 podman[78633]: 2025-11-23 20:40:40.449198114 +0000 UTC m=+0.281776214 container init a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:40:40 np0005532762 podman[78633]: 2025-11-23 20:40:40.457566814 +0000 UTC m=+0.290144894 container start a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 23 15:40:40 np0005532762 podman[78633]: 2025-11-23 20:40:40.525953141 +0000 UTC m=+0.358531221 container attach a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]: [
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:    {
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "available": false,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "being_replaced": false,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "ceph_device_lvm": false,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "lsm_data": {},
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "lvs": [],
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "path": "/dev/sr0",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "rejected_reasons": [
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "Has a FileSystem",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "Insufficient space (<5GB)"
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        ],
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        "sys_api": {
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "actuators": null,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "device_nodes": [
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:                "sr0"
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            ],
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "devname": "sr0",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "human_readable_size": "482.00 KB",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "id_bus": "ata",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "model": "QEMU DVD-ROM",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "nr_requests": "2",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "parent": "/dev/sr0",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "partitions": {},
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "path": "/dev/sr0",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "removable": "1",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "rev": "2.5+",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "ro": "0",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "rotational": "1",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "sas_address": "",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "sas_device_handle": "",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "scheduler_mode": "mq-deadline",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "sectors": 0,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "sectorsize": "2048",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "size": 493568.0,
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "support_discard": "2048",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "type": "disk",
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:            "vendor": "QEMU"
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:        }
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]:    }
Nov 23 15:40:41 np0005532762 mystifying_elgamal[78649]: ]
Nov 23 15:40:41 np0005532762 systemd[1]: libpod-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope: Deactivated successfully.
Nov 23 15:40:41 np0005532762 podman[78633]: 2025-11-23 20:40:41.155586967 +0000 UTC m=+0.988165077 container died a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:40:41 np0005532762 systemd[1]: var-lib-containers-storage-overlay-6344a23d2411243019c62c5de6e9256e307f5f6a3061acd1155508beeab150fa-merged.mount: Deactivated successfully.
Nov 23 15:40:41 np0005532762 podman[78633]: 2025-11-23 20:40:41.730232673 +0000 UTC m=+1.562810753 container remove a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_elgamal, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:40:41 np0005532762 systemd[1]: libpod-conmon-a27fc3ed11e40ce239ee9355521c0485a4a7ae3df95079af43a91660699317af.scope: Deactivated successfully.
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.492 iops: 6269.861 elapsed_sec: 0.478
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: log_channel(cluster) log [WRN] : OSD bench result of 6269.861471 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 0 waiting for initial osdmap
Nov 23 15:40:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:48.206+0000 7fc203957640 -1 osd.0 0 waiting for initial osdmap
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 23 15:40:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-osd-0[77609]: 2025-11-23T20:40:48.228+0000 7fc1fe76c640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 9 state: booting -> active
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 23 15:40:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:40:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:40:59 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:40:59 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.829790116 +0000 UTC m=+0.043968826 container create bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:41:04 np0005532762 systemd[1]: Started libpod-conmon-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope.
Nov 23 15:41:04 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.901821057 +0000 UTC m=+0.115999797 container init bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.810587544 +0000 UTC m=+0.024766274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.907642074 +0000 UTC m=+0.121820784 container start bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.911194447 +0000 UTC m=+0.125373157 container attach bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:41:04 np0005532762 affectionate_ellis[79918]: 167 167
Nov 23 15:41:04 np0005532762 systemd[1]: libpod-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope: Deactivated successfully.
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.911990379 +0000 UTC m=+0.126169089 container died bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:41:04 np0005532762 systemd[1]: var-lib-containers-storage-overlay-922a57aeb57721a2f458f3450d38cc731f16953c203e8eda355a2cab9b58c1c8-merged.mount: Deactivated successfully.
Nov 23 15:41:04 np0005532762 podman[79902]: 2025-11-23 20:41:04.982640642 +0000 UTC m=+0.196819362 container remove bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 23 15:41:04 np0005532762 systemd[1]: libpod-conmon-bc05ce2cec43290cfea9d42723d0fb4721edd46f5b0965af5795c9957fc251a9.scope: Deactivated successfully.
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.031294931 +0000 UTC m=+0.031114797 container create 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:05 np0005532762 systemd[1]: Started libpod-conmon-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope.
Nov 23 15:41:05 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:41:05 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:05 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:05 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:05 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.084892592 +0000 UTC m=+0.084712468 container init 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.092298605 +0000 UTC m=+0.092118471 container start 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.095367853 +0000 UTC m=+0.095187719 container attach 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.017153874 +0000 UTC m=+0.016973760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:05 np0005532762 systemd[1]: libpod-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope: Deactivated successfully.
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.174222651 +0000 UTC m=+0.174042557 container died 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:05 np0005532762 systemd[1]: var-lib-containers-storage-overlay-9b62b1e6eeaac7927f1a32cf21ca4fe22b949db430f8c53cf62b9bc177295e04-merged.mount: Deactivated successfully.
Nov 23 15:41:05 np0005532762 podman[79936]: 2025-11-23 20:41:05.216928679 +0000 UTC m=+0.216748545 container remove 0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:05 np0005532762 systemd[1]: libpod-conmon-0d339a0e07800eb18856a0e18c57aa063d4121a570e4eb823c0e2702d57f354d.scope: Deactivated successfully.
Nov 23 15:41:05 np0005532762 systemd[1]: Reloading.
Nov 23 15:41:05 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:41:05 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:41:05 np0005532762 systemd[1]: Reloading.
Nov 23 15:41:05 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:41:05 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:41:05 np0005532762 systemd[1]: Starting Ceph mon.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:41:05 np0005532762 podman[80116]: 2025-11-23 20:41:05.960734889 +0000 UTC m=+0.037776228 container create ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:41:06 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:06 np0005532762 podman[80116]: 2025-11-23 20:41:05.942563026 +0000 UTC m=+0.019604395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:06 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:06 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:06 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c1f5686b6423e7ba841e6680e952139dba21ea00e5fbe7f67fcc68c35cb861/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:06 np0005532762 podman[80116]: 2025-11-23 20:41:06.064302177 +0000 UTC m=+0.141343546 container init ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Nov 23 15:41:06 np0005532762 podman[80116]: 2025-11-23 20:41:06.075829139 +0000 UTC m=+0.152870488 container start ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mon-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:06 np0005532762 bash[80116]: ec83ddfeced6ca540ac5dcb02096fff1389ee87d706a6fb5f966f976b514b52e
Nov 23 15:41:06 np0005532762 systemd[1]: Started Ceph mon.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: pidfile_write: ignore empty --pid-file
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: load: jerasure load: lrc 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: RocksDB version: 7.9.2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Git sha 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: DB SUMMARY
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: DB Session ID:  RYN2LDD9QR94TIN0USPF
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: CURRENT file:  CURRENT
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                         Options.error_if_exists: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.create_if_missing: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                                     Options.env: 0x560648be2c20
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                                Options.info_log: 0x560649e32e40
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                              Options.statistics: (nil)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                               Options.use_fsync: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                              Options.db_log_dir: 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                                 Options.wal_dir: 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                    Options.write_buffer_manager: 0x560649e37900
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.unordered_write: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                               Options.row_cache: None
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                              Options.wal_filter: None
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.two_write_queues: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.wal_compression: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.atomic_flush: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.max_background_jobs: 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.max_background_compactions: -1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.max_subcompactions: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                          Options.max_open_files: -1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Compression algorithms supported:
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kZSTD supported: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kXpressCompression supported: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kZlibCompression supported: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:           Options.merge_operator: 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:        Options.compaction_filter: None
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560649e32700)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560649e57350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.compression: NoCompression
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.num_levels: 7
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                           Options.bloom_locality: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                               Options.ttl: 2592000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                       Options.enable_blob_files: false
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                           Options.min_blob_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466144960, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466147052, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930466147258, "job": 1, "event": "recovery_finished"}
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560649e58e00
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: DB pointer 0x560649f62000
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560649e57350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(???) e0 preinit fsid 03808be8-ae4a-5548-82e6-4a294f1bc627
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).mds e1 new map
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012btime 2025-11-23T20:38:56:367641+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 3314933000852226048, adjusting msgr requires
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).osd e16 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Deploying daemon crash.compute-1 on compute-1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/2074746697' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f9775703-f092-47d3-b1e4-23e694631322"}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/459267552' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "71c99843-04fc-447b-a9fd-4e17520a545c"}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Deploying daemon osd.0 on compute-1
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Deploying daemon osd.1 on compute-0
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: OSD bench result of 2031.118864 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: osd.1 [v2:192.168.122.100:6802/2449545263,v1:192.168.122.100:6803/2449545263] boot
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: OSD bench result of 6269.861471 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: osd.0 [v2:192.168.122.101:6800/220289678,v1:192.168.122.101:6801/220289678] boot
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1130454146' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1425917096' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4197123902' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Deploying daemon mon.compute-2 on compute-2
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1651014750' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:06 np0005532762 ceph-mon[80135]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 23 15:41:12 np0005532762 ceph-mon[80135]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Nov 23 15:41:12 np0005532762 ceph-mon[80135]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Nov 23 15:41:12 np0005532762 ceph-mon[80135]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 23 15:41:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: Deploying daemon mon.compute-1 on compute-1
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-0 calling monitor election
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-2 calling monitor election
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 23 15:41:15 np0005532762 ceph-mon[80135]:    application not enabled on pool 'vms'
Nov 23 15:41:15 np0005532762 ceph-mon[80135]:    application not enabled on pool 'volumes'
Nov 23 15:41:15 np0005532762 ceph-mon[80135]:    application not enabled on pool 'backups'
Nov 23 15:41:15 np0005532762 ceph-mon[80135]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:15 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 15:41:16 np0005532762 ceph-mon[80135]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 23 15:41:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e17 _set_new_cache_sizes cache_size:1019939832 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: Deploying daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: mon.compute-0 calling monitor election
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: mon.compute-2 calling monitor election
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 23 15:41:17 np0005532762 ceph-mon[80135]:    application not enabled on pool 'vms'
Nov 23 15:41:17 np0005532762 ceph-mon[80135]:    application not enabled on pool 'volumes'
Nov 23 15:41:17 np0005532762 ceph-mon[80135]:    application not enabled on pool 'backups'
Nov 23 15:41:17 np0005532762 ceph-mon[80135]:    application not enabled on pool 'images'
Nov 23 15:41:17 np0005532762 ceph-mon[80135]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 23 15:41:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: mon.compute-1 calling monitor election
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2361136095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.873369252 +0000 UTC m=+0.041297056 container create 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 15:41:18 np0005532762 systemd[1]: Started libpod-conmon-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope.
Nov 23 15:41:18 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.852832318 +0000 UTC m=+0.020760162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.955558568 +0000 UTC m=+0.123486412 container init 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.962793096 +0000 UTC m=+0.130720920 container start 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.966085304 +0000 UTC m=+0.134013118 container attach 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:41:18 np0005532762 heuristic_ritchie[80281]: 167 167
Nov 23 15:41:18 np0005532762 systemd[1]: libpod-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope: Deactivated successfully.
Nov 23 15:41:18 np0005532762 podman[80265]: 2025-11-23 20:41:18.973648922 +0000 UTC m=+0.141576736 container died 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:41:19 np0005532762 systemd[1]: var-lib-containers-storage-overlay-a829ade8f18bf6c42712ac8804fc57c680fb1a9af4154c3bdab4920c950cd7ee-merged.mount: Deactivated successfully.
Nov 23 15:41:19 np0005532762 podman[80265]: 2025-11-23 20:41:19.021534364 +0000 UTC m=+0.189462188 container remove 649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:41:19 np0005532762 systemd[1]: libpod-conmon-649722fa42b7c8a8da4772e2df81a986517a43a089dfe3d3096fe3cd19af2124.scope: Deactivated successfully.
Nov 23 15:41:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:41:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:41:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:41:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.kgyerp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: Deploying daemon mgr.compute-1.kgyerp on compute-1
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 23 15:41:19 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:41:19 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:41:19 np0005532762 systemd[1]: Starting Ceph mgr.compute-1.kgyerp for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:41:19 np0005532762 podman[80421]: 2025-11-23 20:41:19.778919925 +0000 UTC m=+0.033730419 container create 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 23 15:41:19 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:19 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:19 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:19 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e81940c4a4f3b4624576586d4b63b42882f37fedff4e95d7e8c93cdad4df98/merged/var/lib/ceph/mgr/ceph-compute-1.kgyerp supports timestamps until 2038 (0x7fffffff)
Nov 23 15:41:19 np0005532762 podman[80421]: 2025-11-23 20:41:19.840973301 +0000 UTC m=+0.095783795 container init 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Nov 23 15:41:19 np0005532762 podman[80421]: 2025-11-23 20:41:19.845209029 +0000 UTC m=+0.100019523 container start 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:41:19 np0005532762 bash[80421]: 7db62be7e181db03e92260aa0f19556b1d450268d0fe5d51d3beda04ac329e42
Nov 23 15:41:19 np0005532762 podman[80421]: 2025-11-23 20:41:19.763639923 +0000 UTC m=+0.018450437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:41:19 np0005532762 systemd[1]: Started Ceph mgr.compute-1.kgyerp for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:41:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e20 e20: 2 total, 2 up, 2 in
Nov 23 15:41:19 np0005532762 ceph-mgr[80441]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:41:19 np0005532762 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:41:19 np0005532762 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 15:41:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[7.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20 pruub=11.916127205s) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active pruub 56.793205261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20 pruub=11.916127205s) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown pruub 56.793205261s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:19 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 15:41:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.007+0000 7f6f8aff0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 15:41:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.083+0000 7f6f8aff0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/3743302872' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 15:41:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:20.871+0000 7f6f8aff0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e21 e21: 2 total, 2 up, 2 in
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=12/13 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.8( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.7( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.2( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.3( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.14( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.11( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.16( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.17( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.1a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 21 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=12/12 les/c/f=13/13/0 sis=20) [0] r=0 lpr=20 pi=[12,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e21 _set_new_cache_sizes cache_size:1020053411 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: Deploying daemon crash.compute-2 on compute-2
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/39405231' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.507+0000 7f6f8aff0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:  from numpy import show_config as show_numpy_config
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.693+0000 7f6f8aff0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.765+0000 7f6f8aff0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:41:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:21.900+0000 7f6f8aff0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 23 15:41:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 23 15:41:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e22 e22: 2 total, 2 up, 2 in
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 15:41:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Nov 23 15:41:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:22 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:41:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:22.933+0000 7f6f8aff0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e23 e23: 2 total, 2 up, 2 in
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.169+0000 7f6f8aff0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.248+0000 7f6f8aff0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.318+0000 7f6f8aff0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.396+0000 7f6f8aff0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.469+0000 7f6f8aff0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.826+0000 7f6f8aff0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Nov 23 15:41:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 15:41:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:23.935+0000 7f6f8aff0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e24 e24: 2 total, 2 up, 2 in
Nov 23 15:41:24 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1243267938' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 23 15:41:24 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:24 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 23 15:41:24 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 15:41:24 np0005532762 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:24 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 15:41:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:24.390+0000 7f6f8aff0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:24 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Nov 23 15:41:24 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 23 15:41:24 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 23 15:41:24 np0005532762 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:24 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 15:41:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:24.952+0000 7f6f8aff0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.025+0000 7f6f8aff0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2261115406' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/1014258786' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]: dispatch
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "89316dd3-297e-4d1b-953e-7f2ac7cbe63c"}]': finished
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.118+0000 7f6f8aff0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.266+0000 7f6f8aff0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.339+0000 7f6f8aff0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.499+0000 7f6f8aff0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.728+0000 7f6f8aff0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 23 15:41:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:25 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 15:41:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:25.990+0000 7f6f8aff0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:26 np0005532762 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:41:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:26.061+0000 7f6f8aff0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:41:26 np0005532762 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55f128318d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 15:41:26 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4110558162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 23 15:41:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054715 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 23 15:41:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 23 15:41:27 np0005532762 ceph-mon[80135]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:41:27 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 23 15:41:27 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:27 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Nov 23 15:41:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Nov 23 15:41:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Nov 23 15:41:28 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/54502927' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 23 15:41:28 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 23 15:41:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 23 15:41:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 23 15:41:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203415871s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.151290894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225918770s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173805237s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225991249s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173904419s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203372002s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.151290894s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203423500s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.151290894s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225964546s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173904419s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.226203918s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174087524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225861549s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173805237s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.203300476s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.151290894s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.226050377s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174087524s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225702286s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174362183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225683212s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174362183s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225502968s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174270630s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225488663s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174270630s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225337029s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174171448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225158691s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174041748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225261688s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174163818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225074768s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.173973083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225261688s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174171448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225251198s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174163818s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225267410s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174171448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225241661s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174171448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225033760s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.173973083s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225116730s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225327492s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174407959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225315094s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174407959s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225172997s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174324036s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225155830s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174324036s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225176811s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 70.174400330s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.225156784s) [1] r=-1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.174400330s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/330844918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1f( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.11( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.10( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.15( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.16( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.9( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.7( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.2( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.f( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.e( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1c( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1a( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.1b( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.18( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 29 pg[5.4( empty local-lis/les=0/0 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=24/24 les/c/f=26/26/0 sis=29) [0] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=23/23/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: Cluster is now healthy
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:30 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:41:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 23 15:41:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 23 15:41:32 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 23 15:41:32 np0005532762 ceph-mon[80135]: Deploying daemon osd.2 on compute-2
Nov 23 15:41:32 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 23 15:41:32 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 23 15:41:33 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 23 15:41:33 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1120149195' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 23 15:41:33 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 23 15:41:33 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 23 15:41:34 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/475116719' entity='client.admin' 
Nov 23 15:41:34 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 23 15:41:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 23 15:41:35 np0005532762 ceph-mon[80135]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:41:35 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:35 np0005532762 ceph-mon[80135]: Saving service ingress.rgw.default spec with placement count:2
Nov 23 15:41:35 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 23 15:41:36 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 23 15:41:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 23 15:41:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: Saving service node-exporter spec with placement *
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: Saving service grafana spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: Saving service prometheus spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: Saving service alertmanager spec with placement compute-0;count:1
Nov 23 15:41:37 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 23 15:41:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 23 15:41:38 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/209025710' entity='client.admin' 
Nov 23 15:41:38 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/3810457862' entity='client.admin' 
Nov 23 15:41:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 23 15:41:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 23 15:41:39 np0005532762 podman[80625]: 2025-11-23 20:41:39.955304609 +0000 UTC m=+0.055523464 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:39 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:39 np0005532762 ceph-mon[80135]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 15:41:39 np0005532762 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 23 15:41:39 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1043924838' entity='client.admin' 
Nov 23 15:41:40 np0005532762 podman[80625]: 2025-11-23 20:41:40.043544464 +0000 UTC m=+0.143763229 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 23 15:41:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 23 15:41:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:40 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 23 15:41:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.840518951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969696045s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045534134s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174789429s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839921951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969192505s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.840518951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045534134s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839921951s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839769363s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969139099s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045314789s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174690247s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839769363s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045099258s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174659729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045099258s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839397430s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839397430s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839385986s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969039917s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839334488s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969017029s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839289665s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839385986s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839334488s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044927597s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174743652s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.839289665s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044927597s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838732719s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968681335s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838732719s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044606209s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174667358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.044606209s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.045314789s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838173866s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968399048s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838168144s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968414307s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838168144s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838021278s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968292236s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838173866s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838021278s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838040352s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968322754s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.837761879s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.968147278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838040352s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.837761879s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.043560028s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.174041748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838843346s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 active pruub 79.969337463s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=32 pruub=13.838843346s) [] r=-1 lpr=32 pi=[29,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.043560028s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.021019936s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 active pruub 78.151596069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:41 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=12.021019936s) [] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:41 np0005532762 python3[80818]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:41:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 23 15:41:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 23 15:41:42 np0005532762 ceph-mon[80135]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 23 15:41:42 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1122996363' entity='client.admin' 
Nov 23 15:41:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 23 15:41:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 23 15:41:43 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1003409241' entity='client.admin' 
Nov 23 15:41:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Nov 23 15:41:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1887137413' entity='client.admin' 
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:41:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.617355347s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.617315769s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174789429s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.412119865s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411488533s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411454201s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969139099s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616773129s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616761684s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174690247s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411209106s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.411164284s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969192505s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410974503s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410963058s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969039917s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616515636s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410875320s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410862923s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410815239s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410793304s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969032288s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410750389s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410736084s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969017029s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.412009239s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410283089s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616336823s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.410268784s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968681335s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616302013s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174743652s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.615995407s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.616497517s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174659729s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.615975380s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174667358s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409164429s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409146309s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968399048s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408976555s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408958435s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968322754s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408830643s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408941269s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408670425s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408656120s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968147278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/13 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408802986s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968292236s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.408891678s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.968414307s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.592020512s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.592005253s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.151596069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409728050s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=29/30 n=0 ec=24/16 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.409710884s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.969337463s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.614380836s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:41:45 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=20/21 n=0 ec=20/12 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.614360809s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.174041748s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:41:46 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 23 15:41:46 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: OSD bench result of 9936.100737 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: osd.2 [v2:192.168.122.102:6800/530987644,v1:192.168.122.102:6801/530987644] boot
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1515026058' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='mgr.14122 192.168.122.100:0/2507473718' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 23 15:41:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:47 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 23 15:41:47 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  1: '-n'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  2: 'mgr.compute-1.kgyerp'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  3: '-f'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  4: '--setuser'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  5: 'ceph'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  6: '--setgroup'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  7: 'ceph'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr respawn  exe_path /proc/self/exe
Nov 23 15:41:47 np0005532762 systemd[1]: session-30.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-22.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-29.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-28.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-23.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 30 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd[1]: session-27.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-25.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-26.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-32.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-32.scope: Consumed 59.443s CPU time.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 29 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 22 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 28 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 25 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 27 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 23 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 26 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 32 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 30.
Nov 23 15:41:47 np0005532762 systemd[1]: session-31.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd[1]: session-24.scope: Deactivated successfully.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 20 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 24 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Session 31 logged out. Waiting for processes to exit.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 22.
Nov 23 15:41:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 15:41:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 29.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 28.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 23.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 27.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 25.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 26.
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 32.
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 20.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 31.
Nov 23 15:41:47 np0005532762 systemd-logind[793]: Removed session 24.
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:47.503+0000 7fb2bbba8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:47.590+0000 7fb2bbba8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:41:47 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 15:41:47 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1621977935' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:41:48 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 23 15:41:48 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 23 15:41:48 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 15:41:48 np0005532762 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:48.378+0000 7fb2bbba8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:41:48 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 15:41:48 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.037+0000 7fb2bbba8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:41:49 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 23 15:41:49 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:  from numpy import show_config as show_numpy_config
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.208+0000 7fb2bbba8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.279+0000 7fb2bbba8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 15:41:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:49.418+0000 7fb2bbba8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 15:41:49 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 15:41:50 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 23 15:41:50 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.437+0000 7fb2bbba8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.644+0000 7fb2bbba8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.716+0000 7fb2bbba8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.782+0000 7fb2bbba8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.864+0000 7fb2bbba8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:50.940+0000 7fb2bbba8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:41:50 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 15:41:51 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 23 15:41:51 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.307+0000 7fb2bbba8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.407+0000 7fb2bbba8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 15:41:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:51.831+0000 7fb2bbba8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:41:51 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 15:41:52 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 23 15:41:52 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.405+0000 7fb2bbba8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.477+0000 7fb2bbba8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.558+0000 7fb2bbba8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.719+0000 7fb2bbba8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.789+0000 7fb2bbba8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:52.945+0000 7fb2bbba8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:41:52 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:41:53 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 23 15:41:53 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.161+0000 7fb2bbba8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.427+0000 7fb2bbba8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:41:53.495+0000 7fb2bbba8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55bab5a41860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 15:41:53 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 15:41:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 23 15:41:53 np0005532762 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:41:53 np0005532762 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 15:41:54 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 23 15:41:54 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 23 15:41:54 np0005532762 systemd-logind[793]: New session 33 of user ceph-admin.
Nov 23 15:41:54 np0005532762 systemd[1]: Started Session 33 of User ceph-admin.
Nov 23 15:41:54 np0005532762 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 15:41:54 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:41:54 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:41:54 np0005532762 podman[81448]: 2025-11-23 20:41:54.926674067 +0000 UTC m=+0.057897730 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:41:55 np0005532762 podman[81448]: 2025-11-23 20:41:55.021831859 +0000 UTC m=+0.153055522 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:41:55 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 23 15:41:55 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:55 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 23 15:41:56 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Bus STARTING
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Client ('192.168.122.100', 34418) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:41:55] ENGINE Bus STARTED
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:56 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 23 15:41:57 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:41:57 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:58 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 23 15:41:58 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 23 15:41:58 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:41:58 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:41:59 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Nov 23 15:41:59 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Nov 23 15:42:00 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 23 15:42:00 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  1: '-n'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  2: 'mgr.compute-1.kgyerp'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  3: '-f'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  4: '--setuser'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  5: 'ceph'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  6: '--setgroup'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  7: 'ceph'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr respawn  exe_path /proc/self/exe
Nov 23 15:42:00 np0005532762 systemd[1]: session-33.scope: Deactivated successfully.
Nov 23 15:42:00 np0005532762 systemd[1]: session-33.scope: Consumed 4.381s CPU time.
Nov 23 15:42:00 np0005532762 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Nov 23 15:42:00 np0005532762 systemd-logind[793]: Removed session 33.
Nov 23 15:42:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 15:42:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532762 ceph-mon[80135]: from='mgr.14337 192.168.122.100:0/1869846579' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 15:42:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:00.361+0000 7f8ca9229140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 15:42:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:00.460+0000 7f8ca9229140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 15:42:01 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 23 15:42:01 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 15:42:01 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/319512723' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 23 15:42:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:01.277+0000 7f8ca9229140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 15:42:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:42:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:01.899+0000 7f8ca9229140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:42:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 23 15:42:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:  from numpy import show_config as show_numpy_config
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.073+0000 7f8ca9229140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.143+0000 7f8ca9229140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 15:42:02 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 23 15:42:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:02.289+0000 7f8ca9229140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:42:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 15:42:03 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 23 15:42:03 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 15:42:03 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/2985907711' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.317+0000 7f8ca9229140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.542+0000 7f8ca9229140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.620+0000 7f8ca9229140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.690+0000 7f8ca9229140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.770+0000 7f8ca9229140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 15:42:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:03.842+0000 7f8ca9229140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:03 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 15:42:04 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 23 15:42:04 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 23 15:42:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.186+0000 7f8ca9229140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:42:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.286+0000 7f8ca9229140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 15:42:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:04.726+0000 7f8ca9229140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:04 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 15:42:04 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 23 15:42:05 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.286+0000 7f8ca9229140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.361+0000 7f8ca9229140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.447+0000 7f8ca9229140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.601+0000 7f8ca9229140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.674+0000 7f8ca9229140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 15:42:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:05.845+0000 7f8ca9229140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:05 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:42:05 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 23 15:42:05 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.111+0000 7f8ca9229140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.397+0000 7f8ca9229140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 15:42:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.469+0000 7f8ca9229140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x556b54a8d860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.695+0000 7f1f16187140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 15:42:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:06.774+0000 7f1f16187140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:42:06 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 15:42:06 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 23 15:42:07 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 23 15:42:07 np0005532762 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:07 np0005532762 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 15:42:07 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 15:42:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:07.587+0000 7f1f16187140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:07 np0005532762 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:42:07 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 15:42:07 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 23 15:42:07 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.199+0000 7f1f16187140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:  from numpy import show_config as show_numpy_config
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.357+0000 7f1f16187140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.428+0000 7f1f16187140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 15:42:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:08.562+0000 7f1f16187140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:42:08 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 15:42:08 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 23 15:42:08 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 15:42:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.554+0000 7f1f16187140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:42:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.768+0000 7f1f16187140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:42:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.842+0000 7f1f16187140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 15:42:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.910+0000 7f1f16187140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:42:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:09.995+0000 7f1f16187140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:42:09 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 15:42:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 23 15:42:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 23 15:42:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.065+0000 7f1f16187140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 15:42:10 np0005532762 systemd[1]: Stopping User Manager for UID 42477...
Nov 23 15:42:10 np0005532762 systemd[72671]: Activating special unit Exit the Session...
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped target Main User Target.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped target Basic System.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped target Paths.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped target Sockets.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped target Timers.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 15:42:10 np0005532762 systemd[72671]: Closed D-Bus User Message Bus Socket.
Nov 23 15:42:10 np0005532762 systemd[72671]: Stopped Create User's Volatile Files and Directories.
Nov 23 15:42:10 np0005532762 systemd[72671]: Removed slice User Application Slice.
Nov 23 15:42:10 np0005532762 systemd[72671]: Reached target Shutdown.
Nov 23 15:42:10 np0005532762 systemd[72671]: Finished Exit the Session.
Nov 23 15:42:10 np0005532762 systemd[72671]: Reached target Exit the Session.
Nov 23 15:42:10 np0005532762 systemd[1]: user@42477.service: Deactivated successfully.
Nov 23 15:42:10 np0005532762 systemd[1]: Stopped User Manager for UID 42477.
Nov 23 15:42:10 np0005532762 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 23 15:42:10 np0005532762 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 23 15:42:10 np0005532762 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 23 15:42:10 np0005532762 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 23 15:42:10 np0005532762 systemd[1]: Removed slice User Slice of UID 42477.
Nov 23 15:42:10 np0005532762 systemd[1]: user-42477.slice: Consumed 1min 5.143s CPU time.
Nov 23 15:42:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.428+0000 7f1f16187140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:42:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.528+0000 7f1f16187140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 15:42:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 23 15:42:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 23 15:42:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:10.990+0000 7f1f16187140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:42:10 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 15:42:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.546+0000 7f1f16187140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 15:42:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.615+0000 7f1f16187140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:42:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.692+0000 7f1f16187140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 15:42:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 15:42:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.857+0000 7f1f16187140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 15:42:11 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 23 15:42:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:11.931+0000 7f1f16187140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:42:11 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 15:42:11 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 23 15:42:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.092+0000 7f1f16187140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:42:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.310+0000 7f1f16187140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 15:42:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.581+0000 7f1f16187140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 15:42:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:42:12.651+0000 7f1f16187140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55fea3e11860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 15:42:12 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 15:42:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 23 15:42:12 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 23 15:42:12 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 23 15:42:13 np0005532762 systemd-logind[793]: New session 34 of user ceph-admin.
Nov 23 15:42:13 np0005532762 systemd[1]: Created slice User Slice of UID 42477.
Nov 23 15:42:13 np0005532762 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 23 15:42:13 np0005532762 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 23 15:42:13 np0005532762 systemd[1]: Starting User Manager for UID 42477...
Nov 23 15:42:13 np0005532762 systemd[82658]: Queued start job for default target Main User Target.
Nov 23 15:42:13 np0005532762 systemd[82658]: Created slice User Application Slice.
Nov 23 15:42:13 np0005532762 systemd[82658]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 15:42:13 np0005532762 systemd[82658]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:42:13 np0005532762 systemd[82658]: Reached target Paths.
Nov 23 15:42:13 np0005532762 systemd[82658]: Reached target Timers.
Nov 23 15:42:13 np0005532762 systemd[82658]: Starting D-Bus User Message Bus Socket...
Nov 23 15:42:13 np0005532762 systemd[82658]: Starting Create User's Volatile Files and Directories...
Nov 23 15:42:13 np0005532762 systemd[82658]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:42:13 np0005532762 systemd[82658]: Reached target Sockets.
Nov 23 15:42:13 np0005532762 systemd[82658]: Finished Create User's Volatile Files and Directories.
Nov 23 15:42:13 np0005532762 systemd[82658]: Reached target Basic System.
Nov 23 15:42:13 np0005532762 systemd[82658]: Reached target Main User Target.
Nov 23 15:42:13 np0005532762 systemd[82658]: Startup finished in 134ms.
Nov 23 15:42:13 np0005532762 systemd[1]: Started User Manager for UID 42477.
Nov 23 15:42:13 np0005532762 systemd[1]: Started Session 34 of User ceph-admin.
Nov 23 15:42:13 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 23 15:42:14 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:42:15 np0005532762 podman[82795]: 2025-11-23 20:42:15.34406769 +0000 UTC m=+1.174523850 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e2 new map
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e2 print_map#012e2#012btime 2025-11-23T20:42:15:389935+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:15.389822+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Nov 23 15:42:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 23 15:42:15 np0005532762 podman[82795]: 2025-11-23 20:42:15.454088889 +0000 UTC m=+1.284545029 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Bus STARTING
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Bus STARTED
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:42:14] ENGINE Client ('192.168.122.100', 49202) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:42:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-1 to 128.0M
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-1 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Adjusting osd_memory_target on compute-0 to 127.9M
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: Deploying daemon node-exporter.compute-0 on compute-0
Nov 23 15:42:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 23 15:42:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:22 np0005532762 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:22 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 23 15:42:22 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1678765881' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 23 15:42:23 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:23 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:23 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:23 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:23 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:23 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:23 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:23 np0005532762 ceph-mon[80135]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 23 15:42:23 np0005532762 systemd[1]: Starting Ceph node-exporter.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:23 np0005532762 bash[84139]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 23 15:42:24 np0005532762 bash[84139]: Getting image source signatures
Nov 23 15:42:24 np0005532762 bash[84139]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 23 15:42:24 np0005532762 bash[84139]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 23 15:42:24 np0005532762 bash[84139]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 23 15:42:24 np0005532762 bash[84139]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 23 15:42:24 np0005532762 bash[84139]: Writing manifest to image destination
Nov 23 15:42:24 np0005532762 podman[84139]: 2025-11-23 20:42:24.992524169 +0000 UTC m=+1.056002415 container create 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:25 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c505063cb26b444322d1f6be8db3eb38f4e56399e30c28b7dad8c73418e9a0dc/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:25 np0005532762 podman[84139]: 2025-11-23 20:42:25.047404136 +0000 UTC m=+1.110882392 container init 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:25 np0005532762 podman[84139]: 2025-11-23 20:42:25.051645869 +0000 UTC m=+1.115124115 container start 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:42:25 np0005532762 bash[84139]: 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770
Nov 23 15:42:25 np0005532762 podman[84139]: 2025-11-23 20:42:24.973488836 +0000 UTC m=+1.036967112 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.058Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.058Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=arp
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=bcache
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=bonding
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=cpu
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=dmi
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=edac
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=entropy
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=filefd
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netclass
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netdev
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=netstat
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nfs
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=nvme
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=os
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=pressure
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=rapl
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=selinux
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=softnet
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.059Z caller=node_exporter.go:117 level=info collector=stat
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=textfile
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=time
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=uname
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=xfs
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=node_exporter.go:117 level=info collector=zfs
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 23 15:42:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1[84215]: ts=2025-11-23T20:42:25.060Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 23 15:42:25 np0005532762 systemd[1]: Started Ceph node-exporter.compute-1 for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:25 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:25 np0005532762 ceph-mon[80135]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 23 15:42:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:27 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/1987053989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 23 15:42:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:42:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.cwocqr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-2.cwocqr on compute-2
Nov 23 15:42:33 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.exwrda", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.100255854 +0000 UTC m=+0.040966830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.230472475 +0000 UTC m=+0.171183431 container create a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 23 15:42:35 np0005532762 systemd[1]: Started libpod-conmon-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope.
Nov 23 15:42:35 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.386556875 +0000 UTC m=+0.327267851 container init a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.397281376 +0000 UTC m=+0.337992332 container start a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 23 15:42:35 np0005532762 adoring_colden[84332]: 167 167
Nov 23 15:42:35 np0005532762 systemd[1]: libpod-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope: Deactivated successfully.
Nov 23 15:42:35 np0005532762 conmon[84332]: conmon a1c060fb663cb949795d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope/container/memory.events
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.45535648 +0000 UTC m=+0.396067436 container attach a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.456205251 +0000 UTC m=+0.396916207 container died a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:35 np0005532762 systemd[1]: var-lib-containers-storage-overlay-fcec6bf971eda074fd2853f7883ea835e2ff20397a05c0103263b591c7f3f0ac-merged.mount: Deactivated successfully.
Nov 23 15:42:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 23 15:42:35 np0005532762 podman[84317]: 2025-11-23 20:42:35.628902226 +0000 UTC m=+0.569613182 container remove a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:35 np0005532762 systemd[1]: libpod-conmon-a1c060fb663cb949795d5899a097ddf6c2abfb145c1215805978c43509c15c98.scope: Deactivated successfully.
Nov 23 15:42:35 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:35 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:35 np0005532762 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-1.exwrda on compute-1
Nov 23 15:42:35 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/1418789177' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 15:42:35 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 23 15:42:35 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:35 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:35 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:35 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:36 np0005532762 systemd[1]: Starting Ceph rgw.rgw.compute-1.exwrda for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:36 np0005532762 podman[84479]: 2025-11-23 20:42:36.395616966 +0000 UTC m=+0.040042017 container create 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:36 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:36 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:36 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:36 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e62e0742dadffaf92198a8f5dab2c9e91c3c278e894a753e4c2e260330598c5b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.exwrda supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:36 np0005532762 podman[84479]: 2025-11-23 20:42:36.45452591 +0000 UTC m=+0.098950981 container init 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:36 np0005532762 podman[84479]: 2025-11-23 20:42:36.460718381 +0000 UTC m=+0.105143432 container start 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 23 15:42:36 np0005532762 bash[84479]: 29b10272a4e9e5cf00b639059e82bf80ac94e4fab69c520ec6e9080d5ceb68c1
Nov 23 15:42:36 np0005532762 podman[84479]: 2025-11-23 20:42:36.37977524 +0000 UTC m=+0.024200321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:36 np0005532762 systemd[1]: Started Ceph rgw.rgw.compute-1.exwrda for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:36 np0005532762 radosgw[84498]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:36 np0005532762 radosgw[84498]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 23 15:42:36 np0005532762 radosgw[84498]: framework: beast
Nov 23 15:42:36 np0005532762 radosgw[84498]: framework conf key: endpoint, val: 192.168.122.101:8082
Nov 23 15:42:36 np0005532762 radosgw[84498]: init_numa not setting numa affinity
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lntkpb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:36 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: Deploying daemon rgw.rgw.compute-0.lntkpb on compute-0
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 23 15:42:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 44 pg[10.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 23 15:42:39 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 45 pg[10.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [0] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.utubtn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:39 np0005532762 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-2.utubtn on compute-2
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:40 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 23 15:42:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 23 15:42:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jcbopz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-0.jcbopz on compute-0
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e3 new map
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e3 print_map#012e3#012btime 2025-11-23T20:42:42:276651+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:15.389822+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.utubtn{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e4 new map
Nov 23 15:42:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e4 print_map#012e4#012btime 2025-11-23T20:42:42:291982+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:42.291972+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.utubtn{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 23 15:42:42 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 48 pg[12.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: daemon mds.cephfs.compute-2.utubtn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: daemon mds.cephfs.compute-2.utubtn is now active in filesystem cephfs as rank 0
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.gmfhnm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 23 15:42:43 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 49 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e5 new map
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e5 print_map#012e5#012btime 2025-11-23T20:42:43:300630+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:43.300628+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e6 new map
Nov 23 15:42:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e6 print_map#012e6#012btime 2025-11-23T20:42:43:320643+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:43.300628+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.348672673 +0000 UTC m=+0.038459619 container create 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:42:43 np0005532762 systemd[1]: Started libpod-conmon-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope.
Nov 23 15:42:43 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.426702463 +0000 UTC m=+0.116489429 container init 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.332039627 +0000 UTC m=+0.021826593 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.433518328 +0000 UTC m=+0.123305274 container start 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.436538302 +0000 UTC m=+0.126325278 container attach 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Nov 23 15:42:43 np0005532762 competent_lalande[85191]: 167 167
Nov 23 15:42:43 np0005532762 systemd[1]: libpod-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope: Deactivated successfully.
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.439076273 +0000 UTC m=+0.128863219 container died 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:42:43 np0005532762 systemd[1]: var-lib-containers-storage-overlay-88f227109d53bd65d9cd9ce01490e3ff5c635129845f762ff145e7e43b658a1e-merged.mount: Deactivated successfully.
Nov 23 15:42:43 np0005532762 podman[85176]: 2025-11-23 20:42:43.476473314 +0000 UTC m=+0.166260270 container remove 8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:42:43 np0005532762 systemd[1]: libpod-conmon-8e0eaa7535316345fa465ad362d1eee43a8cb2778aec3532a5ee814fe16163f5.scope: Deactivated successfully.
Nov 23 15:42:43 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:43 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:43 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:43 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:43 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:43 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:44 np0005532762 systemd[1]: Starting Ceph mds.cephfs.compute-1.gmfhnm for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:44 np0005532762 podman[85333]: 2025-11-23 20:42:44.243100502 +0000 UTC m=+0.041292767 container create 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2cef276f6913a86c842440362a727d6e5da9d15277bccd093da7207adca4ba/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.gmfhnm supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:44 np0005532762 podman[85333]: 2025-11-23 20:42:44.302517789 +0000 UTC m=+0.100710064 container init 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 23 15:42:44 np0005532762 podman[85333]: 2025-11-23 20:42:44.308258098 +0000 UTC m=+0.106450353 container start 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 23 15:42:44 np0005532762 bash[85333]: 80ba811dbdb9350860999bfbe11c3c2b025911441594fc68256ee132d5b2f265
Nov 23 15:42:44 np0005532762 podman[85333]: 2025-11-23 20:42:44.22534757 +0000 UTC m=+0.023539845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:44 np0005532762 systemd[1]: Started Ceph mds.cephfs.compute-1.gmfhnm for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: Deploying daemon mds.cephfs.compute-1.gmfhnm on compute-1
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.101:0/4191610001' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.102:0/141380246' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 23 15:42:44 np0005532762 ceph-mds[85352]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:44 np0005532762 ceph-mds[85352]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 23 15:42:44 np0005532762 ceph-mds[85352]: main not setting numa affinity
Nov 23 15:42:44 np0005532762 ceph-mds[85352]: pidfile_write: ignore empty --pid-file
Nov 23 15:42:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mds-cephfs-compute-1-gmfhnm[85348]: starting mds.cephfs.compute-1.gmfhnm at 
Nov 23 15:42:44 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 6 from mon.2
Nov 23 15:42:44 np0005532762 radosgw[84498]: v1 topic migration: starting v1 topic migration..
Nov 23 15:42:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-rgw-rgw-compute-1-exwrda[84494]: 2025-11-23T20:42:44.571+0000 7f84b3dee980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 23 15:42:44 np0005532762 radosgw[84498]: LDAP not started since no server URIs were provided in the configuration.
Nov 23 15:42:44 np0005532762 radosgw[84498]: v1 topic migration: finished v1 topic migration
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: framework: beast
Nov 23 15:42:44 np0005532762 radosgw[84498]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 23 15:42:44 np0005532762 radosgw[84498]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: starting handler: beast
Nov 23 15:42:44 np0005532762 radosgw[84498]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 15:42:44 np0005532762 radosgw[84498]: mgrc service_daemon_register rgw.24260 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.exwrda,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=7b74c4d0-333d-4a78-943d-fd3c4abdfa87,zone_name=default,zonegroup_id=3560ca63-18fc-44aa-8d4c-f5d89c554a9f,zonegroup_name=default}
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 23 15:42:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.215887499 +0000 UTC m=+0.040585019 container create bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:42:45 np0005532762 systemd[1]: Started libpod-conmon-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope.
Nov 23 15:42:45 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.196514228 +0000 UTC m=+0.021211768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.301648618 +0000 UTC m=+0.126346228 container init bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.308194507 +0000 UTC m=+0.132892037 container start bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.312849411 +0000 UTC m=+0.137546951 container attach bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:42:45 np0005532762 goofy_goldberg[85510]: 167 167
Nov 23 15:42:45 np0005532762 systemd[1]: libpod-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope: Deactivated successfully.
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.314696026 +0000 UTC m=+0.139393576 container died bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='client.? 192.168.122.100:0/4054506421' entity='client.rgw.rgw.compute-0.lntkpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-2.cwocqr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='client.? ' entity='client.rgw.rgw.compute-1.exwrda' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.fuxuha-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:45 np0005532762 systemd[1]: var-lib-containers-storage-overlay-eb50ee9993dcf5154d2c27100485101537cd26c0ff85a4fced92bc7ac12309be-merged.mount: Deactivated successfully.
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e7 new map
Nov 23 15:42:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e7 print_map#012e7#012btime 2025-11-23T20:42:45:339402+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:43.300628+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:45 np0005532762 podman[85493]: 2025-11-23 20:42:45.551686286 +0000 UTC m=+0.376383806 container remove bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_goldberg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 23 15:42:45 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 7 from mon.2
Nov 23 15:42:45 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Monitors have assigned me to become a standby
Nov 23 15:42:45 np0005532762 systemd[1]: libpod-conmon-bf271ae442860ffb8b73185e132ae6ece90af78b5700f36c11375d20b301a1a0.scope: Deactivated successfully.
Nov 23 15:42:45 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:45 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:45 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:45 np0005532762 systemd[1]: Reloading.
Nov 23 15:42:45 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:42:45 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:42:46 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:42:46 np0005532762 podman[85653]: 2025-11-23 20:42:46.358127693 +0000 UTC m=+0.037333519 container create 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:42:46 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:46 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:46 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:46 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:42:46 np0005532762 podman[85653]: 2025-11-23 20:42:46.417352316 +0000 UTC m=+0.096558132 container init 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 23 15:42:46 np0005532762 podman[85653]: 2025-11-23 20:42:46.422347627 +0000 UTC m=+0.101553413 container start 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:42:46 np0005532762 bash[85653]: 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539
Nov 23 15:42:46 np0005532762 podman[85653]: 2025-11-23 20:42:46.340283548 +0000 UTC m=+0.019489364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:42:46 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.0.0.compute-1.fuxuha-rgw
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Bind address in nfs.cephfs.0.0.compute-1.fuxuha's ganesha conf is defaulting to empty
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Deploying daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: Cluster is now healthy
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e8 new map
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e8 print_map#012e8#012btime 2025-11-23T20:42:46:698669+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:46.341150+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e9 new map
Nov 23 15:42:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e9 print_map#012e9#012btime 2025-11-23T20:42:47:710992+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:46.341150+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:48 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e10 new map
Nov 23 15:42:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).mds e10 print_map#012e10#012btime 2025-11-23T20:42:49:046556+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T20:42:15.389822+0000#012modified#0112025-11-23T20:42:46.341150+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24181}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24181 members: 24181#012[mds.cephfs.compute-2.utubtn{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3232844591,v1:192.168.122.102:6805/3232844591] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jcbopz{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3257423559,v1:192.168.122.100:6807/3257423559] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.gmfhnm{-1:24284} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3633651935,v1:192.168.122.101:6805/3633651935] compat {c=[1],r=[1],i=[1fff]}]
Nov 23 15:42:49 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Updating MDS map to version 10 from mon.2
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:42:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:42:50 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:50 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:50 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:50 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqbktw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:51 np0005532762 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:51 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.1.0.compute-2.dqbktw-rgw
Nov 23 15:42:51 np0005532762 ceph-mon[80135]: Bind address in nfs.cephfs.1.0.compute-2.dqbktw's ganesha conf is defaulting to empty
Nov 23 15:42:51 np0005532762 ceph-mon[80135]: Deploying daemon nfs.cephfs.1.0.compute-2.dqbktw on compute-2
Nov 23 15:42:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 23 15:42:52 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 23 15:42:53 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy
Nov 23 15:42:53 np0005532762 ceph-mon[80135]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 23 15:42:53 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:42:55 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 23 15:42:55 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 23 15:42:55 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 23 15:42:55 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bfglcy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 23 15:42:56 np0005532762 ceph-mon[80135]: Rados config object exists: conf-nfs.cephfs
Nov 23 15:42:56 np0005532762 ceph-mon[80135]: Creating key for client.nfs.cephfs.2.0.compute-0.bfglcy-rgw
Nov 23 15:42:56 np0005532762 ceph-mon[80135]: Bind address in nfs.cephfs.2.0.compute-0.bfglcy's ganesha conf is defaulting to empty
Nov 23 15:42:56 np0005532762 ceph-mon[80135]: Deploying daemon nfs.cephfs.2.0.compute-0.bfglcy on compute-0
Nov 23 15:42:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:42:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:42:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:42:58 np0005532762 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-1.iwomei on compute-1
Nov 23 15:42:59 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.632821732 +0000 UTC m=+2.966877935 container create 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 systemd[1]: Started libpod-conmon-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope.
Nov 23 15:43:00 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.618223107 +0000 UTC m=+2.952279330 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.700535242 +0000 UTC m=+3.034591445 container init 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.706250401 +0000 UTC m=+3.040306604 container start 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.708692051 +0000 UTC m=+3.042748254 container attach 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 sad_lehmann[85930]: 0 0
Nov 23 15:43:00 np0005532762 systemd[1]: libpod-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope: Deactivated successfully.
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.711313424 +0000 UTC m=+3.045369627 container died 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 systemd[1]: var-lib-containers-storage-overlay-2b5425586260c5292cbb2df48e662a158f895e72f187fbdf44270cda1f0c3018-merged.mount: Deactivated successfully.
Nov 23 15:43:00 np0005532762 podman[85815]: 2025-11-23 20:43:00.757178681 +0000 UTC m=+3.091234884 container remove 9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82 (image=quay.io/ceph/haproxy:2.3, name=sad_lehmann)
Nov 23 15:43:00 np0005532762 systemd[1]: libpod-conmon-9873f14bf1083298c9392744801ca342cd89baf36930faa141afa81364969b82.scope: Deactivated successfully.
Nov 23 15:43:00 np0005532762 systemd[1]: Reloading.
Nov 23 15:43:00 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:00 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:01 np0005532762 systemd[1]: Reloading.
Nov 23 15:43:01 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:01 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:01 np0005532762 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.iwomei for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:01 np0005532762 podman[86074]: 2025-11-23 20:43:01.523399798 +0000 UTC m=+0.036995672 container create 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:43:01 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f9df31263d11eb37a2048e9a899a06412245a22ed7b0de6d53ce609478ae00/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:01 np0005532762 podman[86074]: 2025-11-23 20:43:01.569423779 +0000 UTC m=+0.083019673 container init 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:43:01 np0005532762 podman[86074]: 2025-11-23 20:43:01.573631241 +0000 UTC m=+0.087227115 container start 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:43:01 np0005532762 bash[86074]: 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb
Nov 23 15:43:01 np0005532762 podman[86074]: 2025-11-23 20:43:01.507727446 +0000 UTC m=+0.021323340 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 23 15:43:01 np0005532762 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.iwomei for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/204301 (2) : New worker #1 (4) forked
Nov 23 15:43:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:02 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:02 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:02 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:03 np0005532762 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-0.uvukit on compute-0
Nov 23 15:43:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:06 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950001da0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:07 np0005532762 ceph-mon[80135]: Deploying daemon haproxy.nfs.cephfs.compute-2.dxqoem on compute-2
Nov 23 15:43:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:11 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69380016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:12 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:12 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:12 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:12 np0005532762 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-1.lwmzxc on compute-1
Nov 23 15:43:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:13 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 23 15:43:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 23 15:43:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.143753947 +0000 UTC m=+3.154170866 container create 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, architecture=x86_64, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, name=keepalived, distribution-scope=public, io.openshift.expose-services=)
Nov 23 15:43:14 np0005532762 systemd[1]: Started libpod-conmon-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope.
Nov 23 15:43:14 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.125221046 +0000 UTC m=+3.135637995 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.216747904 +0000 UTC m=+3.227164823 container init 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4)
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.224795421 +0000 UTC m=+3.235212320 container start 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.228639004 +0000 UTC m=+3.239055893 container attach 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, vendor=Red Hat, Inc.)
Nov 23 15:43:14 np0005532762 pensive_wu[86289]: 0 0
Nov 23 15:43:14 np0005532762 systemd[1]: libpod-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope: Deactivated successfully.
Nov 23 15:43:14 np0005532762 conmon[86289]: conmon 7cd15430a35da61fca1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope/container/memory.events
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.232757015 +0000 UTC m=+3.243173924 container died 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 23 15:43:14 np0005532762 systemd[1]: var-lib-containers-storage-overlay-3c3c43ee5078ece2bf91e0477512df0f6301ada03b2edfc0062513138b9a6431-merged.mount: Deactivated successfully.
Nov 23 15:43:14 np0005532762 podman[86196]: 2025-11-23 20:43:14.273516137 +0000 UTC m=+3.283933036 container remove 7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f (image=quay.io/ceph/keepalived:2.2.4, name=pensive_wu, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Nov 23 15:43:14 np0005532762 systemd[1]: libpod-conmon-7cd15430a35da61fca1d997bd4e2ca801ba61ed44b801bb28f09fcf5c663991f.scope: Deactivated successfully.
Nov 23 15:43:14 np0005532762 systemd[1]: Reloading.
Nov 23 15:43:14 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:14 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:14 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 23 15:43:14 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 23 15:43:14 np0005532762 systemd[1]: Reloading.
Nov 23 15:43:14 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:43:14 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:43:14 np0005532762 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.lwmzxc for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:43:15 np0005532762 podman[86434]: 2025-11-23 20:43:15.033747429 +0000 UTC m=+0.021280659 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 23 15:43:15 np0005532762 podman[86434]: 2025-11-23 20:43:15.199238719 +0000 UTC m=+0.186771929 container create 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc.)
Nov 23 15:43:15 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e36f41e8d5e45c92a16bf1811bab47b38b430083dfb9b697e6aa3e7e1a93e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:15 np0005532762 podman[86434]: 2025-11-23 20:43:15.406003844 +0000 UTC m=+0.393537084 container init 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, distribution-scope=public, release=1793, com.redhat.component=keepalived-container)
Nov 23 15:43:15 np0005532762 podman[86434]: 2025-11-23 20:43:15.411005255 +0000 UTC m=+0.398538475 container start 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, release=1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Nov 23 15:43:15 np0005532762 bash[86434]: 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 23 15:43:15 np0005532762 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.lwmzxc for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Starting VRRP child process, pid=4
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: Startup complete
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: (VI_0) Entering BACKUP STATE (init)
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:15 2025: VRRP_Script(check_backend) succeeded
Nov 23 15:43:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 23 15:43:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:15 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 23 15:43:15 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 53 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53 pruub=13.259194374s) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active pruub 174.180419922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:15 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 53 pg[7.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53 pruub=13.259194374s) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown pruub 174.180419922s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.16( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.15( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.a( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.c( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1d( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1c( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.10( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.17( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.11( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.b( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.8( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.14( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.e( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.12( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.5( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.7( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.2( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.d( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1e( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.19( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1a( empty local-lis/les=20/21 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.17( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.0( empty local-lis/les=53/54 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.7( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.12( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.19( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=20/20 les/c/f=21/21/0 sis=53) [0] r=0 lpr=53 pi=[20,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:16 np0005532762 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-0.spcytb on compute-0
Nov 23 15:43:17 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 23 15:43:17 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 23 15:43:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 23 15:43:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:17 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 23 15:43:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 23 15:43:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:18 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Nov 23 15:43:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Nov 23 15:43:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:19 2025: (VI_0) Entering MASTER STATE
Nov 23 15:43:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:19 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 23 15:43:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 57 pg[10.0( v 50'991 (0'0,50'991] local-lis/les=44/45 n=178 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=14.416520119s) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 50'990 active pruub 180.095977783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 57 pg[10.0( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=14.416520119s) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 0'0 unknown pruub 180.095977783s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e6f28 space 0x55805b2e29d0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d5428 space 0x55805b0e4aa0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d4528 space 0x55805b2ad2c0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320168 space 0x55805b2e2760 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b3207a8 space 0x55805b345390 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7f68 space 0x55805b2e2aa0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33cc08 space 0x55805b2ad940 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2fe028 space 0x55805b2ad6d0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d4668 space 0x55805b12ede0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b321d88 space 0x55805b345e20 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b1063e8 space 0x55805b2e36d0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7248 space 0x55805b2ad530 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33c528 space 0x55805b2ad7a0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e6348 space 0x55805b2e3d50 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d47a8 space 0x55805b2ad600 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e7568 space 0x55805b2e2eb0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320208 space 0x55805b2adae0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b321ba8 space 0x55805b344280 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b302708 space 0x55805b2e32c0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2d43e8 space 0x55805b2ad390 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805adb7c48 space 0x55805b12f940 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e72e8 space 0x55805b2ac280 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2fede8 space 0x55805b374900 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b2e59c8 space 0x55805b2e3870 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b30b248 space 0x55805b344010 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b320a28 space 0x55805b2ad460 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b0e2208 space 0x55805b2e2690 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b33c5c8 space 0x55805b2ad870 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805ad6f108 space 0x55805b2ad1f0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x55805b984b40) operator()   moving buffer(0x55805b30ac08 space 0x55805b2e3ef0 0x0~1000 clean)
Nov 23 15:43:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:20 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 23 15:43:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.17( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.15( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.16( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.14( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.13( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.2( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.f( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.e( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.c( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.a( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.d( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.8( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.3( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.b( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.5( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.4( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.6( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=9.881776810s) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active pruub 176.305847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.19( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1a( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1c( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1d( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1f( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1e( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.10( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.12( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.9( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.7( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.18( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1b( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.11( v 50'991 lc 0'0 (0'0,50'991] local-lis/les=44/45 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[12.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=9.881776810s) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown pruub 176.305847168s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.0( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 50'990 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.5( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.4( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 58 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=44/44 les/c/f=45/45/0 sis=57) [0] r=0 lpr=57 pi=[44,57)/1 crt=50'991 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: Deploying daemon keepalived.nfs.cephfs.compute-2.cpybdt on compute-2
Nov 23 15:43:21 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 23 15:43:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.11( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.10( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.13( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.4( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.12( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.15( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.6( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.9( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.8( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.a( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.c( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.b( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.e( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.5( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.2( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.d( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.3( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1f( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1c( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1a( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1b( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.18( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.19( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.16( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.14( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.f( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.7( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1e( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1d( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.17( empty local-lis/les=48/49 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.15( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.9( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.5( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.0( empty local-lis/les=58/59 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.14( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.16( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1f( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.f( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 59 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=48/48 les/c/f=49/49/0 sis=58) [0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 23 15:43:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 23 15:43:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69300032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:23 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 23 15:43:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc[86449]: Sun Nov 23 20:43:23 2025: (VI_0) Entering BACKUP STATE
Nov 23 15:43:24 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:24 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 23 15:43:24 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 23 15:43:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 23 15:43:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 23 15:43:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 23 15:43:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 23 15:43:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 23 15:43:26 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 23 15:43:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.19( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.8( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=0/0 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.17( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.984288216s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.433227539s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989400864s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438369751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989379883s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438369751s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367900848s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817062378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.984259605s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.433227539s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367888451s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817062378s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982778549s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431915283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982155800s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431396484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982145309s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431396484s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.989006996s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438354492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367665291s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817108154s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.982691765s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431915283s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988914490s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438354492s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367646217s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817108154s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988936424s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438629150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988898277s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438629150s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367081642s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816986084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981494904s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431411743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.367024422s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816986084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988358498s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438400269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366985321s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816986084s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988343239s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438400269s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981382370s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431411743s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366968155s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816986084s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981339455s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431854248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.981307030s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431854248s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988631248s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.438995361s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.9( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988500595s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 active pruub 183.439117432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.9( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988471985s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.439117432s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988383293s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.438995361s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366314888s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.817001343s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366045952s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816894531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988273621s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439132690s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366248131s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.817001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.366029739s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816894531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988091469s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439178467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988256454s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439132690s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988065720s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439178467s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987947464s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439163208s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987930298s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439163208s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980611801s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.431869507s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980578423s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.431869507s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.365480423s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816894531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.365449905s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816894531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987771988s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439315796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987749100s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439315796s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364793777s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816421509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987556458s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439285278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364755630s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816421509s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980506897s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432418823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980226517s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432052612s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987535477s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439285278s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364359856s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816406250s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980488777s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432418823s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364325523s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816406250s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.980042458s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432052612s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987216949s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439407349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364044189s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816329956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.364027023s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816329956s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.987140656s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439407349s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979757309s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=58'992 lcod 58'993 mlcod 58'993 active pruub 182.432083130s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363799095s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816345215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363780975s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816345215s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979698181s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=58'992 lcod 58'993 mlcod 0'0 unknown NOTIFY pruub 182.432083130s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363586426s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816345215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.3( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986955643s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 active pruub 183.439453125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.3( v 59'1 (0'0,59'1] local-lis/les=58/59 n=1 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986624718s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=59'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.439453125s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363509178s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816345215s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988234520s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441223145s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979729652s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432739258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363109589s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816329956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363191605s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816421509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988133430s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441223145s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986459732s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.439743042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363083839s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816421509s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.979521751s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432739258s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986246109s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.439743042s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.363082886s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816329956s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362651825s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362511635s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978612900s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432525635s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988206863s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442138672s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.988192558s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442138672s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978567123s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432525635s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362186432s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.362171173s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986949921s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441360474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361430168s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.815872192s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978149414s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432601929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361415863s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.815872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986902237s) [1] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441360474s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.978117943s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432601929s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361785889s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816192627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.361577034s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816192627s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977862358s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360968590s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.815765381s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977847099s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432693481s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360881805s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.815765381s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986898422s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441864014s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986882210s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441864014s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977659225s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432678223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977678299s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432769775s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977610588s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432678223s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977665901s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432769775s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.355856895s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.811096191s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.355841637s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.811096191s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360867500s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 active pruub 185.816238403s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986061096s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.441467285s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986042976s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.441467285s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986731529s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442169189s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977339745s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432830811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986633301s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 active pruub 183.442184448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986579895s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442184448s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/48 lis/c=58/58 les/c/f=59/59/0 sis=60 pruub=10.986688614s) [2] r=-1 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 183.442169189s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.977225304s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432830811s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/20 lis/c=53/53 les/c/f=54/54/0 sis=60 pruub=13.360697746s) [1] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 185.816238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.976782799s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 182.432846069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:27 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 60 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=60 pruub=9.975978851s) [2] r=-1 lpr=60 pi=[57,60)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.432846069s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 23 15:43:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009e30 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=58'992 lcod 58'993 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=58'992 lcod 58'993 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.15( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.14( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.17( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.14( v 59'57 lc 59'56 (0'0,59'57] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=59'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.12( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.10( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.f( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.f( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.d( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.8( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.e( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.5( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.4( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.7( v 47'48 (0'0,47'48] local-lis/les=60/61 n=1 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.4( v 50'45 (0'0,50'45] local-lis/les=60/61 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.6( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=1 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.1b( v 50'45 lc 50'8 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.18( v 50'45 lc 50'19 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1d( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1b( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1c( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.12( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.10( v 58'48 lc 50'14 (0'0,58'48] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=58'48 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.11( v 43'12 lc 0'0 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.a( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1e( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[8.19( v 50'45 (0'0,50'45] local-lis/les=60/61 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=50'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[9.12( v 43'12 (0'0,43'12] local-lis/les=60/61 n=0 ec=55/42 lis/c=55/55 les/c/f=56/56/0 sis=60) [0] r=0 lpr=60 pi=[55,60)/1 crt=43'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:28 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 61 pg[11.1a( v 47'48 (0'0,47'48] local-lis/les=60/61 n=0 ec=57/46 lis/c=57/57 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[57,60)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:29 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 23 15:43:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:29 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955586433s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.431549072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955795288s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.431884766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955766678s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.431884766s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955229759s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.431549072s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955196381s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432067871s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955163002s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432067871s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955307961s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432479858s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.955270767s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432479858s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954610825s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.432510376s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954590797s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.432510376s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954823494s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433013916s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954804420s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433013916s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954584122s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433151245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954562187s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433151245s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954124451s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.433120728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=62 pruub=15.954091072s) [1] r=-1 lpr=62 pi=[57,62)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.433120728s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.6( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.2( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.5( v 59'994 (0'0,59'994] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=59'994 lcod 58'993 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 62 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [2]/[0] async=[2] r=0 lpr=61 pi=[57,61)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:30 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:30 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 23 15:43:30 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.992103577s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489990234s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.992055893s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.991169930s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489715576s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.991127968s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489715576s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990653038s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489517212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990526199s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489685059s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990082741s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489517212s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990076065s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489517212s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.986392975s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.485916138s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.13( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.986267090s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.485916138s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989757538s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489517212s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989747047s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489562988s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.990136147s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489685059s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.15( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989697456s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489562988s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989837646s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489913940s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.985430717s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.485565186s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989803314s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489913940s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.985372543s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.485565186s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989325523s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.489791870s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.989226341s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.489791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.5( v 62'998 (0'0,62'998] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.988718987s) [2] async=[2] r=-1 lpr=63 pi=[57,63)/1 crt=59'994 lcod 62'997 mlcod 62'997 active pruub 190.489791870s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[10.5( v 62'998 (0'0,62'998] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=63 pruub=14.988603592s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=59'994 lcod 62'997 mlcod 0'0 unknown NOTIFY pruub 190.489791870s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.a( v 49'39 (0'0,49'39] local-lis/les=62/63 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.6( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.2( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 63 pg[6.e( v 49'39 lc 48'19 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: Regenerating cephadm self-signed grafana TLS certificates
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: Deploying daemon grafana.compute-0 on compute-0
Nov 23 15:43:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.979523659s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490051270s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.3( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.979463577s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490051270s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978591919s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490005493s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978328705s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490005493s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978281975s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490081787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978183746s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490036011s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=61/62 n=5 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978253365s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490081787s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.11( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978151321s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490036011s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978464127s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 190.490554810s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=61/62 n=6 ec=57/44 lis/c=61/57 les/c/f=62/58/0 sis=64 pruub=13.978434563s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.490554810s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 64 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=63) [1]/[0] async=[1] r=0 lpr=63 pi=[57,63)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500028a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.041407585s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573394775s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.041334152s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573394775s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040791512s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573303223s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040742874s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573303223s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.035518646s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.568283081s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.2( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.035380363s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.568283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040402412s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573410034s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.040328979s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573410034s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039586067s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573455811s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039546013s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573455811s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039502144s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573547363s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=63/64 n=5 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.039457321s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573547363s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.038745880s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.573318481s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.033739090s) [1] async=[1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 192.568267822s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.033395767s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.568267822s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 65 pg[10.12( v 50'991 (0'0,50'991] local-lis/les=63/64 n=6 ec=57/44 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=15.038654327s) [1] r=-1 lpr=65 pi=[57,65)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.573318481s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:34 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Nov 23 15:43:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 23 15:43:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Nov 23 15:43:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.19 deep-scrub starts
Nov 23 15:43:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.19 deep-scrub ok
Nov 23 15:43:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 23 15:43:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 23 15:43:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 23 15:43:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 23 15:43:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 23 15:43:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:39 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:39 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.3( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=67/68 n=2 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.b( v 49'39 lc 0'0 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.f( v 49'39 lc 48'1 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 68 pg[6.7( v 49'39 lc 48'21 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 23 15:43:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 23 15:43:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 23 15:43:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540013b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 23 15:43:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 23 15:43:42 np0005532762 ceph-mon[80135]: Deploying daemon haproxy.rgw.default.compute-0.pteysg on compute-0
Nov 23 15:43:42 np0005532762 ceph-mon[80135]: Health check failed: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded (PG_DEGRADED)
Nov 23 15:43:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204342 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:43:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Nov 23 15:43:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Nov 23 15:43:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 23 15:43:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:43.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 23 15:43:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Nov 23 15:43:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: Deploying daemon haproxy.rgw.default.compute-2.tmivar on compute-2
Nov 23 15:43:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540013b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:44.750365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930624750472, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7100, "num_deletes": 254, "total_data_size": 19730730, "memory_usage": 20664960, "flush_reason": "Manual Compaction"}
Nov 23 15:43:44 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625022075, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12675099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 7105, "table_properties": {"data_size": 12647598, "index_size": 17658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8773, "raw_key_size": 84866, "raw_average_key_size": 24, "raw_value_size": 12579835, "raw_average_value_size": 3601, "num_data_blocks": 779, "num_entries": 3493, "num_filter_entries": 3493, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 1763930466, "file_creation_time": 1763930624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 271761 microseconds, and 27632 cpu microseconds.
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.022133) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12675099 bytes OK
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.022151) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026799) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026814) EVENT_LOG_v1 {"time_micros": 1763930625026810, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.026829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19692908, prev total WAL file size 19694885, number of live WAL files 2.
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.030748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(12MB) 8(1648B)]
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625030830, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12676747, "oldest_snapshot_seqno": -1}
Nov 23 15:43:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 23 15:43:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3243 keys, 12671646 bytes, temperature: kUnknown
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625187185, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12671646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12644793, "index_size": 17655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 81455, "raw_average_key_size": 25, "raw_value_size": 12580115, "raw_average_value_size": 3879, "num_data_blocks": 778, "num_entries": 3243, "num_filter_entries": 3243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.187384) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12671646 bytes
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.195756) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.0 rd, 81.0 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(12.1, 0.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3498, records dropped: 255 output_compression: NoCompression
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.195773) EVENT_LOG_v1 {"time_micros": 1763930625195766, "job": 4, "event": "compaction_finished", "compaction_time_micros": 156409, "compaction_time_cpu_micros": 22598, "output_level": 6, "num_output_files": 1, "total_output_size": 12671646, "num_input_records": 3498, "num_output_records": 3243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625198245, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930625198288, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:45.030667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:45.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:45.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:46 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 23 15:43:46 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 23 15:43:46 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:46 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:46 np0005532762 ceph-mon[80135]: Deploying daemon keepalived.rgw.default.compute-0.xymmfk on compute-0
Nov 23 15:43:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009fd0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:46 np0005532762 systemd-logind[793]: New session 36 of user zuul.
Nov 23 15:43:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:46 np0005532762 systemd[1]: Started Session 36 of User zuul.
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 23 15:43:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:47.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 23 15:43:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:47.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:47 np0005532762 python3.9[86638]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:43:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.493597984s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.428329468s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.493473053s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.428329468s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.497659683s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.432708740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.497632027s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.432708740s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496999741s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=59'998 lcod 59'997 mlcod 59'997 active pruub 206.432723999s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496963501s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=59'998 lcod 59'997 mlcod 0'0 unknown NOTIFY pruub 206.432723999s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496662140s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 206.432937622s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 69 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=69 pruub=13.496646881s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.432937622s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.16 deep-scrub starts
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.16 deep-scrub ok
Nov 23 15:43:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: Deploying daemon keepalived.rgw.default.compute-2.zjypck on compute-2
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/226 objects degraded (1.327%), 2 pgs degraded)
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: Cluster is now healthy
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 23 15:43:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 70 pg[6.5( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.135625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629135721, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 434, "num_deletes": 251, "total_data_size": 372779, "memory_usage": 382584, "flush_reason": "Manual Compaction"}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629142776, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 244168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7110, "largest_seqno": 7539, "table_properties": {"data_size": 241580, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6752, "raw_average_key_size": 19, "raw_value_size": 236093, "raw_average_value_size": 672, "num_data_blocks": 26, "num_entries": 351, "num_filter_entries": 351, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930625, "oldest_key_time": 1763930625, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7171 microseconds, and 3313 cpu microseconds.
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.142812) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 244168 bytes OK
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.142830) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144260) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144273) EVENT_LOG_v1 {"time_micros": 1763930629144269, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.144287) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 369934, prev total WAL file size 388166, number of live WAL files 2.
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.145235) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(238KB)], [15(12MB)]
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629145313, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12915814, "oldest_snapshot_seqno": -1}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3074 keys, 11707743 bytes, temperature: kUnknown
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629272230, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11707743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11682846, "index_size": 16084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 79263, "raw_average_key_size": 25, "raw_value_size": 11621758, "raw_average_value_size": 3780, "num_data_blocks": 702, "num_entries": 3074, "num_filter_entries": 3074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.272647) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11707743 bytes
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.276647) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(100.8) write-amplify(47.9) OK, records in: 3594, records dropped: 520 output_compression: NoCompression
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.276685) EVENT_LOG_v1 {"time_micros": 1763930629276668, "job": 6, "event": "compaction_finished", "compaction_time_micros": 127015, "compaction_time_cpu_micros": 47136, "output_level": 6, "num_output_files": 1, "total_output_size": 11707743, "num_input_records": 3594, "num_output_records": 3074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629276950, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930629280839, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.145105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:43:49.280986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:43:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009ff0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540027f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[6.d( v 49'39 lc 48'13 (0'0,49'39] local-lis/les=70/71 n=1 ec=53/18 lis/c=60/60 les/c/f=61/61/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[6.5( v 49'39 lc 48'11 (0'0,49'39] local-lis/les=70/71 n=2 ec=53/18 lis/c=60/60 les/c/f=61/62/0 sis=70) [0] r=0 lpr=70 pi=[60,70)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.4( v 59'998 (0'0,59'998] local-lis/les=70/71 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=59'998 lcod 59'997 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 71 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:43:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:50 np0005532762 ceph-mon[80135]: Deploying daemon prometheus.compute-0 on compute-0
Nov 23 15:43:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846215248s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761810303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.4( v 71'1001 (0'0,71'1001] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846132278s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=59'998 lcod 71'1000 mlcod 71'1000 active pruub 210.761749268s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846039772s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761657715s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846134186s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761810303s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845974922s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.4( v 71'1001 (0'0,71'1001] local-lis/les=70/71 n=6 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.846064568s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=59'998 lcod 71'1000 mlcod 0'0 unknown NOTIFY pruub 210.761749268s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845782280s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 210.761657715s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:43:50 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 72 pg[10.14( v 50'991 (0'0,50'991] local-lis/les=70/71 n=5 ec=57/44 lis/c=70/57 les/c/f=71/58/0 sis=72 pruub=14.845746994s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.761657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:43:50 np0005532762 python3.9[86853]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:43:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:51.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:51.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 23 15:43:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:43:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 23 15:43:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:53.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:53.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:53 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:43:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:43:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:56 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' 
Nov 23 15:43:56 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 23 15:43:56 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 23 15:43:56 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 23 15:43:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:43:56 np0005532762 systemd[1]: session-34.scope: Deactivated successfully.
Nov 23 15:43:56 np0005532762 systemd[1]: session-34.scope: Consumed 18.391s CPU time.
Nov 23 15:43:56 np0005532762 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Nov 23 15:43:56 np0005532762 systemd-logind[793]: Removed session 34.
Nov 23 15:43:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setuser ceph since I am not root
Nov 23 15:43:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: ignoring --setgroup ceph since I am not root
Nov 23 15:43:56 np0005532762 ceph-mgr[80441]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 23 15:43:56 np0005532762 ceph-mgr[80441]: pidfile_write: ignore empty --pid-file
Nov 23 15:43:56 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'alerts'
Nov 23 15:43:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:56.963+0000 7f9dbada4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:43:56 np0005532762 ceph-mgr[80441]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 15:43:56 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'balancer'
Nov 23 15:43:57 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Nov 23 15:43:57 np0005532762 ceph-mon[80135]: from='mgr.14424 192.168.122.100:0/3245007846' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 23 15:43:57 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Nov 23 15:43:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:57.042+0000 7f9dbada4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532762 ceph-mgr[80441]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'cephadm'
Nov 23 15:43:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:57.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:43:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:57.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:43:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:57 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'crash'
Nov 23 15:43:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:57.832+0000 7f9dbada4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532762 ceph-mgr[80441]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 15:43:57 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'dashboard'
Nov 23 15:43:58 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 23 15:43:58 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'devicehealth'
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.462+0000 7f9dbada4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]:  from numpy import show_config as show_numpy_config
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.632+0000 7f9dbada4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'influx'
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.705+0000 7f9dbada4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'insights'
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'iostat'
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:43:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:58.847+0000 7f9dbada4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 15:43:58 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'k8sevents'
Nov 23 15:43:59 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 23 15:43:59 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'localpool'
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 15:43:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:43:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:43:59.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'mirroring'
Nov 23 15:43:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:43:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 23 15:43:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:43:59.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'nfs'
Nov 23 15:43:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:43:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:43:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:43:59.854+0000 7f9dbada4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 15:43:59 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'orchestrator'
Nov 23 15:44:00 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.061+0000 7f9dbada4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 15:44:00 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.134+0000 7f9dbada4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'osd_support'
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.203+0000 7f9dbada4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.287+0000 7f9dbada4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'progress'
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.357+0000 7f9dbada4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'prometheus'
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.705+0000 7f9dbada4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rbd_support'
Nov 23 15:44:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:00.800+0000 7f9dbada4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 15:44:00 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'restful'
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rgw'
Nov 23 15:44:01 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 23 15:44:01 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.238+0000 7f9dbada4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'rook'
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:01.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:01.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:01 np0005532762 systemd[1]: session-36.scope: Deactivated successfully.
Nov 23 15:44:01 np0005532762 systemd[1]: session-36.scope: Consumed 8.014s CPU time.
Nov 23 15:44:01 np0005532762 systemd-logind[793]: Session 36 logged out. Waiting for processes to exit.
Nov 23 15:44:01 np0005532762 systemd-logind[793]: Removed session 36.
Nov 23 15:44:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.805+0000 7f9dbada4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'selftest'
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.877+0000 7f9dbada4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'snap_schedule'
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:44:01 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'stats'
Nov 23 15:44:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:01.963+0000 7f9dbada4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 23 15:44:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'status'
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telegraf'
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.118+0000 7f9dbada4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'telemetry'
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.190+0000 7f9dbada4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.353+0000 7f9dbada4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003a30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'volumes'
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.577+0000 7f9dbada4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Loading python module 'zabbix'
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.855+0000 7f9dbada4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 2025-11-23T20:44:02.927+0000 7f9dbada4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr load Constructed class from module: dashboard
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: mgr load Constructed class from module: prometheus
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Starting engine...
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: ms_deliver_dispatch: unhandled message 0x55875b587860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [prometheus INFO root] Starting engine...
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:02] ENGINE Bus STARTING
Nov 23 15:44:02 np0005532762 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:02] ENGINE Bus STARTING
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: CherryPy Checker:
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: The Application mounted at '' has an empty config.
Nov 23 15:44:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: 
Nov 23 15:44:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 23 15:44:02 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 23 15:44:03 np0005532762 ceph-mgr[80441]: [dashboard INFO root] Engine started...
Nov 23 15:44:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 15:44:03 np0005532762 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Serving on http://:::9283
Nov 23 15:44:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-mgr-compute-1-kgyerp[80437]: [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 15:44:03 np0005532762 ceph-mgr[80441]: [prometheus INFO cherrypy.error] [23/Nov/2025:20:44:03] ENGINE Bus STARTED
Nov 23 15:44:03 np0005532762 ceph-mgr[80441]: [prometheus INFO root] Engine started.
Nov 23 15:44:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00a1b0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:03.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:03 np0005532762 systemd-logind[793]: New session 37 of user ceph-admin.
Nov 23 15:44:03 np0005532762 systemd[1]: Started Session 37 of User ceph-admin.
Nov 23 15:44:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c002b10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: Active manager daemon compute-0.oyehye restarted
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: Activating manager daemon compute-0.oyehye
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: Manager daemon compute-0.oyehye is now available
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/mirror_snapshot_schedule"}]: dispatch
Nov 23 15:44:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.oyehye/trash_purge_schedule"}]: dispatch
Nov 23 15:44:03 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 23 15:44:03 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 23 15:44:04 np0005532762 podman[87106]: 2025-11-23 20:44:04.228334327 +0000 UTC m=+0.075694631 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 23 15:44:04 np0005532762 podman[87106]: 2025-11-23 20:44:04.324175877 +0000 UTC m=+0.171536161 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 23 15:44:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204404 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:04 np0005532762 podman[87225]: 2025-11-23 20:44:04.733344879 +0000 UTC m=+0.051000702 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:04 np0005532762 podman[87225]: 2025-11-23 20:44:04.768385085 +0000 UTC m=+0.086040898 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:44:04 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 23 15:44:04 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 23 15:44:05 np0005532762 podman[87318]: 2025-11-23 20:44:05.078370316 +0000 UTC m=+0.055324538 container exec 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 15:44:05 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Bus STARTING
Nov 23 15:44:05 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Serving on http://192.168.122.100:8765
Nov 23 15:44:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 23 15:44:05 np0005532762 podman[87339]: 2025-11-23 20:44:05.14410609 +0000 UTC m=+0.048833258 container exec_died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 23 15:44:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 23 15:44:05 np0005532762 podman[87318]: 2025-11-23 20:44:05.163640823 +0000 UTC m=+0.140595035 container exec_died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=76) [0] r=0 lpr=76 pi=[65,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.330209732s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 49'39 active pruub 223.509872437s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.e( v 49'39 (0'0,49'39] local-lis/les=62/63 n=1 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.330179214s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 223.509872437s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.329833984s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 49'39 active pruub 223.509826660s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 76 pg[6.6( v 49'39 (0'0,49'39] local-lis/les=62/63 n=2 ec=53/18 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.329812050s) [1] r=-1 lpr=76 pi=[62,76)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 223.509826660s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:05.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:05 np0005532762 podman[87384]: 2025-11-23 20:44:05.369349487 +0000 UTC m=+0.044553912 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:44:05 np0005532762 podman[87384]: 2025-11-23 20:44:05.380075023 +0000 UTC m=+0.055279448 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:44:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:05.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:05 np0005532762 podman[87450]: 2025-11-23 20:44:05.607113684 +0000 UTC m=+0.056600160 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., name=keepalived, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 15:44:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:05 np0005532762 podman[87450]: 2025-11-23 20:44:05.64824432 +0000 UTC m=+0.097730766 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., release=1793, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 23 15:44:05 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Serving on https://192.168.122.100:7150
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Bus STARTED
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: [23/Nov/2025:20:44:04] ENGINE Client ('192.168.122.100', 33786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=77) [0]/[1] r=-1 lpr=77 pi=[65,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 23 15:44:06 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 23 15:44:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=78) [0] r=0 lpr=78 pi=[64,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 78 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=78) [0] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 23 15:44:07 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=79) [0]/[2] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:08 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 23 15:44:08 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231281281s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 222.432464600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231248856s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.432464600s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231938362s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 222.433715820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=80 pruub=8.231907845s) [1] r=-1 lpr=80 pi=[57,80)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.433715820s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.16( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.6( v 50'991 (0'0,50'991] local-lis/les=79/80 n=6 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=77/65 les/c/f=78/66/0 sis=79) [0] r=0 lpr=79 pi=[65,79)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 80 pg[6.8( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80) [0] r=0 lpr=80 pi=[53,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.conf
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 23 15:44:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 23 15:44:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=57/58 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=57/58 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 81 pg[6.8( v 49'39 (0'0,49'39] local-lis/les=80/81 n=0 ec=53/18 lis/c=53/53 les/c/f=54/54/0 sis=80) [0] r=0 lpr=80 pi=[53,80)/1 crt=49'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.conf
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.7( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=79/64 les/c/f=80/65/0 sis=81) [0] r=0 lpr=81 pi=[64,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.17( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=79/63 les/c/f=80/64/0 sis=81) [0] r=0 lpr=81 pi=[63,81)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 82 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[57,81)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 23 15:44:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 23 15:44:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: Updating compute-0:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: Updating compute-1:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369442940s) [1] async=[1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 232.239044189s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369385719s) [1] async=[1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 232.238983154s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.18( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369388580s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.239044189s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:11 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 83 pg[10.8( v 50'991 (0'0,50'991] local-lis/les=81/82 n=6 ec=57/44 lis/c=81/57 les/c/f=82/58/0 sis=83 pruub=15.369308472s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.238983154s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:12 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 23 15:44:12 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 23 15:44:12 np0005532762 ceph-mon[80135]: Updating compute-2:/var/lib/ceph/03808be8-ae4a-5548-82e6-4a294f1bc627/config/ceph.client.admin.keyring
Nov 23 15:44:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 23 15:44:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:13.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:13 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 23 15:44:13 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 23 15:44:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:14 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 23 15:44:14 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 23 15:44:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:15 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 23 15:44:15 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 23 15:44:15 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 23 15:44:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 23 15:44:15 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 85 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [0] r=0 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:15 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 85 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=85) [0] r=0 lpr=85 pi=[64,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[64,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:16 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 86 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=86) [0]/[2] r=-1 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:16 np0005532762 ceph-mon[80135]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 23 15:44:17 np0005532762 systemd-logind[793]: New session 38 of user zuul.
Nov 23 15:44:17 np0005532762 systemd[1]: Started Session 38 of User zuul.
Nov 23 15:44:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500024d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:17.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:17 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 23 15:44:17 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: Reconfiguring mgr.compute-0.oyehye (monmap changed)...
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.oyehye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: Reconfiguring daemon mgr.compute-0.oyehye on compute-0
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 23 15:44:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 23 15:44:18 np0005532762 python3.9[88739]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 15:44:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 23 15:44:18 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: Reconfiguring osd.1 (monmap changed)...
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: Reconfiguring daemon osd.1 on compute-0
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87) [0] r=0 lpr=88 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 88 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=87) [0] r=0 lpr=88 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:19 np0005532762 python3.9[88913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 23 15:44:19 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 23 15:44:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204420 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:44:20 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 23 15:44:20 np0005532762 python3.9[89070]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:44:20 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 23 15:44:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 23 15:44:21 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 23 15:44:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:22 np0005532762 python3.9[89224]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:44:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=89) [0] r=0 lpr=89 pi=[64,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=89) [0] r=0 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=65/65 les/c/f=66/66/0 sis=89) [0]/[1] r=-1 lpr=89 pi=[65,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89 pruub=13.820906639s) [1] r=-1 lpr=89 pi=[67,89)/1 crt=49'39 mlcod 49'39 active pruub 241.261001587s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[6.b( v 49'39 (0'0,49'39] local-lis/les=67/68 n=1 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=89 pruub=13.820881844s) [1] r=-1 lpr=89 pi=[67,89)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 241.261001587s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:22 np0005532762 ceph-mon[80135]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 23 15:44:22 np0005532762 ceph-mon[80135]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 23 15:44:22 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 15:44:22 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.9( v 50'991 (0'0,50'991] local-lis/les=88/89 n=6 ec=57/44 lis/c=86/64 les/c/f=87/65/0 sis=88) [0] r=0 lpr=88 pi=[64,88)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 89 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=5 ec=57/44 lis/c=86/63 les/c/f=87/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 23 15:44:22 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 23 15:44:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:23 np0005532762 python3.9[89378]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:44:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=63/63 les/c/f=64/64/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[63,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 90 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=64/64 les/c/f=65/65/0 sis=90) [0]/[2] r=-1 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:23 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:23 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 23 15:44:23 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:23 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 23 15:44:23 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 23 15:44:24 np0005532762 python3.9[89531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:44:24 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 23 15:44:24 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:24 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:24 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:24 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 91 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:25 np0005532762 python3.9[89681]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:44:25 np0005532762 network[89698]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:44:25 np0005532762 network[89699]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:44:25 np0005532762 network[89700]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:44:25 np0005532762 ceph-mon[80135]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 23 15:44:25 np0005532762 ceph-mon[80135]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 23 15:44:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:25.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.a( v 50'991 (0'0,50'991] local-lis/les=91/92 n=6 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 92 pg[10.1a( v 50'991 (0'0,50'991] local-lis/les=91/92 n=5 ec=57/44 lis/c=89/65 les/c/f=90/66/0 sis=91) [0] r=0 lpr=91 pi=[65,91)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 23 15:44:25 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 23 15:44:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 23 15:44:26 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 93 pg[10.b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=6 ec=57/44 lis/c=90/63 les/c/f=91/64/0 sis=92) [0] r=0 lpr=92 pi=[63,92)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:26 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 93 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=5 ec=57/44 lis/c=90/64 les/c/f=91/65/0 sis=92) [0] r=0 lpr=92 pi=[64,92)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 23 15:44:26 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 23 15:44:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204426 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.268561845 +0000 UTC m=+0.037536109 container create 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:27 np0005532762 systemd[82658]: Starting Mark boot as successful...
Nov 23 15:44:27 np0005532762 systemd[82658]: Finished Mark boot as successful.
Nov 23 15:44:27 np0005532762 systemd[1]: Started libpod-conmon-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope.
Nov 23 15:44:27 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.338424152 +0000 UTC m=+0.107398446 container init 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.345995168 +0000 UTC m=+0.114969432 container start 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.250913478 +0000 UTC m=+0.019887762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.350859909 +0000 UTC m=+0.119834173 container attach 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:27 np0005532762 nifty_mcclintock[89815]: 167 167
Nov 23 15:44:27 np0005532762 systemd[1]: libpod-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope: Deactivated successfully.
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.351999686 +0000 UTC m=+0.120973940 container died 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:44:27 np0005532762 systemd[1]: var-lib-containers-storage-overlay-275b54b567039eb4177f36567a9dfde2e1eaa5c6e4149fa94ea2672acf77d657-merged.mount: Deactivated successfully.
Nov 23 15:44:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:27 np0005532762 podman[89797]: 2025-11-23 20:44:27.391331439 +0000 UTC m=+0.160305713 container remove 6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 23 15:44:27 np0005532762 systemd[1]: libpod-conmon-6e60179de44c5833bfa82dd587ae259ce5325384a435a670b16d6fce8fae7371.scope: Deactivated successfully.
Nov 23 15:44:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 23 15:44:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 23 15:44:27 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.894667859 +0000 UTC m=+0.044864120 container create 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 23 15:44:27 np0005532762 systemd[1]: Started libpod-conmon-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope.
Nov 23 15:44:27 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.878606082 +0000 UTC m=+0.028802353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.978751357 +0000 UTC m=+0.128947638 container init 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.986882138 +0000 UTC m=+0.137078399 container start 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:44:27 np0005532762 bold_feynman[89954]: 167 167
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.990169389 +0000 UTC m=+0.140365690 container attach 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:44:27 np0005532762 systemd[1]: libpod-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope: Deactivated successfully.
Nov 23 15:44:27 np0005532762 podman[89933]: 2025-11-23 20:44:27.992709242 +0000 UTC m=+0.142905533 container died 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:28 np0005532762 systemd[1]: var-lib-containers-storage-overlay-e20ace3896bd55337672ed393162a8bc4f8a3bcce828a2a1163677e8572a70ad-merged.mount: Deactivated successfully.
Nov 23 15:44:28 np0005532762 podman[89933]: 2025-11-23 20:44:28.029726976 +0000 UTC m=+0.179923227 container remove 8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:28 np0005532762 systemd[1]: libpod-conmon-8075d0eb802171a3dc0839ee6a5b6c0dec3fed263a88e1fd797d8fb92119ffb9.scope: Deactivated successfully.
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: Reconfiguring osd.0 (monmap changed)...
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: Reconfiguring daemon osd.0 on compute-1
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.546984601 +0000 UTC m=+0.035404636 container create 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:44:28 np0005532762 systemd[1]: Started libpod-conmon-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope.
Nov 23 15:44:28 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.614052359 +0000 UTC m=+0.102472394 container init 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.619217876 +0000 UTC m=+0.107637911 container start 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.622463206 +0000 UTC m=+0.110883271 container attach 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:44:28 np0005532762 quizzical_leavitt[90102]: 167 167
Nov 23 15:44:28 np0005532762 systemd[1]: libpod-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope: Deactivated successfully.
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.623993984 +0000 UTC m=+0.112414059 container died 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.531824886 +0000 UTC m=+0.020244951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:44:28 np0005532762 systemd[1]: var-lib-containers-storage-overlay-27eb6c87b561fc09f9d4f766a41daaa687926dcc9350bd0e32180b5ee2f77e02-merged.mount: Deactivated successfully.
Nov 23 15:44:28 np0005532762 podman[90081]: 2025-11-23 20:44:28.659493141 +0000 UTC m=+0.147913176 container remove 00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:44:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 23 15:44:28 np0005532762 systemd[1]: libpod-conmon-00e525469ad496afa043a9d32e76681d448577b0ba8cf92ca04d84400c1c98e8.scope: Deactivated successfully.
Nov 23 15:44:28 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 23 15:44:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:29 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 23 15:44:29 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.jtkauz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 23 15:44:29 np0005532762 python3.9[90271]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:44:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:44:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:30 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Nov 23 15:44:30 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: Reconfiguring mgr.compute-2.jtkauz (monmap changed)...
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: Reconfiguring daemon mgr.compute-2.jtkauz on compute-2
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 23 15:44:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:30 np0005532762 python3.9[90421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 23 15:44:31 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 23 15:44:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 23 15:44:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 15:44:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 23 15:44:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 94 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=94) [0] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 94 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=94) [0] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:32 np0005532762 python3.9[90576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:44:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 23 15:44:32 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:32 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 23 15:44:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=72/72 les/c/f=73/73/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:44:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:44:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:33 np0005532762 python3.9[90734]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:44:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:33 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 23 15:44:33 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:44:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 23 15:44:34 np0005532762 python3.9[90819]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:44:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 96 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96) [0] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 96 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=96) [0] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 23 15:44:34 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=73/73 les/c/f=74/74/0 sis=97) [0]/[1] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:34 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 97 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 23 15:44:35 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 23 15:44:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 23 15:44:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 23 15:44:35 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[10.c( v 50'991 (0'0,50'991] local-lis/les=97/98 n=6 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:35 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98) [0] r=0 lpr=98 pi=[76,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:35 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 98 pg[10.1c( v 50'991 (0'0,50'991] local-lis/les=97/98 n=5 ec=57/44 lis/c=95/72 les/c/f=96/73/0 sis=97) [0] r=0 lpr=97 pi=[72,97)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:44:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 23 15:44:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:36 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:36 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 23 15:44:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 99 pg[6.e( v 49'39 lc 48'19 (0'0,49'39] local-lis/les=98/99 n=1 ec=53/18 lis/c=76/76 les/c/f=77/77/0 sis=98) [0] r=0 lpr=98 pi=[76,98)/1 crt=49'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 23 15:44:37 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 23 15:44:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.921210289s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 active pruub 255.856842041s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.921073914s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.856842041s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=14.325005531s) [1] r=-1 lpr=100 pi=[67,100)/1 crt=49'39 mlcod 49'39 active pruub 257.261383057s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[6.f( v 49'39 (0'0,49'39] local-lis/les=67/68 n=3 ec=53/18 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=14.324955940s) [1] r=-1 lpr=100 pi=[67,100)/1 crt=49'39 mlcod 0'0 unknown NOTIFY pruub 257.261383057s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.913941383s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 active pruub 255.850540161s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=100 pruub=12.913920403s) [2] r=-1 lpr=100 pi=[81,100)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 255.850540161s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.d( v 50'991 (0'0,50'991] local-lis/les=99/100 n=8 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:37 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 100 pg[10.1d( v 50'991 (0'0,50'991] local-lis/les=99/100 n=5 ec=57/44 lis/c=97/73 les/c/f=98/74/0 sis=99) [0] r=0 lpr=99 pi=[73,99)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 23 15:44:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 23 15:44:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:38 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 101 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=81/82 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:44:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 23 15:44:39 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 23 15:44:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 102 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 102 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=81/81 les/c/f=82/82/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[81,101)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.661826134s) [2] async=[2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 50'991 active pruub 261.000366211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=7 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.661749840s) [2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 261.000366211s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.660367966s) [2] async=[2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 50'991 active pruub 261.000366211s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 103 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=101/102 n=5 ec=57/44 lis/c=101/81 les/c/f=102/82/0 sis=103 pruub=15.660287857s) [2] r=-1 lpr=103 pi=[81,103)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 261.000366211s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 23 15:44:40 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 23 15:44:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 23 15:44:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 23 15:44:41 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 23 15:44:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:44:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:44:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 23 15:44:42 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 23 15:44:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204442 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003c90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 23 15:44:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:43.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:43 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 23 15:44:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 23 15:44:44 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 23 15:44:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:44:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 23 15:44:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:45 np0005532762 ceph-osd[77613]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 23 15:44:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 23 15:44:46 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 105 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=105 pruub=11.388984680s) [2] r=-1 lpr=105 pi=[57,105)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 262.433959961s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:46 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 105 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=105 pruub=11.388861656s) [2] r=-1 lpr=105 pi=[57,105)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.433959961s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:46 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 23 15:44:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500046d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:47 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 23 15:44:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 23 15:44:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 106 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:47 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 106 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=57/58 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:44:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:47.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 23 15:44:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 23 15:44:48 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 107 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=106) [2]/[0] async=[2] r=0 lpr=106 pi=[57,106)/1 crt=50'991 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:44:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204449 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:44:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 23 15:44:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:44:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 23 15:44:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 108 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=106/57 les/c/f=107/58/0 sis=108 pruub=14.884209633s) [2] async=[2] r=-1 lpr=108 pi=[57,108)/1 crt=50'991 lcod 0'0 mlcod 0'0 active pruub 269.114807129s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:44:49 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 108 pg[10.10( v 50'991 (0'0,50'991] local-lis/les=106/107 n=2 ec=57/44 lis/c=106/57 les/c/f=107/58/0 sis=108 pruub=14.884119987s) [2] r=-1 lpr=108 pi=[57,108)/1 crt=50'991 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.114807129s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:44:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:49.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 23 15:44:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 23 15:44:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:51.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 23 15:44:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:51.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 23 15:44:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:52 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 23 15:44:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 23 15:44:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:53.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003cf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 23 15:44:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 23 15:44:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 23 15:44:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 23 15:44:54 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 23 15:44:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 23 15:44:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 23 15:44:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:44:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 23 15:44:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:57.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:44:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:44:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:44:59.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:44:59 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 23 15:44:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 23 15:44:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:44:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:44:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:44:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:44:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:44:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:00 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 23 15:45:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954002d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:01.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:01 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 23 15:45:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 23 15:45:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 15:45:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 23 15:45:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:03.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 23 15:45:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 23 15:45:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 23 15:45:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:05.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 23 15:45:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:05.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 23 15:45:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:06 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 23 15:45:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:07.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:07 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 23 15:45:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 23 15:45:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 122 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=10.758297920s) [1] r=-1 lpr=122 pi=[88,122)/1 crt=50'991 mlcod 0'0 active pruub 283.454803467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:07 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 122 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=122 pruub=10.758251190s) [1] r=-1 lpr=122 pi=[88,122)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 283.454803467s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:08 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 23 15:45:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 23 15:45:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 123 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:08 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 123 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=88/89 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 23 15:45:09 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 124 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=88/88 les/c/f=89/89/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[88,123)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 23 15:45:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=15.395946503s) [1] async=[1] r=-1 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 50'991 active pruub 290.748016357s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:10 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 125 pg[10.19( v 50'991 (0'0,50'991] local-lis/les=123/124 n=7 ec=57/44 lis/c=123/88 les/c/f=124/89/0 sis=125 pruub=15.395783424s) [1] r=-1 lpr=125 pi=[88,125)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 290.748016357s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:10 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 23 15:45:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:12 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:13.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:14 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:15.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 23 15:45:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:15 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 23 15:45:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:15.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:16 np0005532762 python3.9[91218]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:45:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:16 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:16 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 23 15:45:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 23 15:45:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 23 15:45:17 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 128 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=12.716535568s) [1] r=-1 lpr=128 pi=[92,128)/1 crt=50'991 mlcod 0'0 active pruub 295.470977783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:17 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 128 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=128 pruub=12.716499329s) [1] r=-1 lpr=128 pi=[92,128)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 295.470977783s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:18 np0005532762 python3.9[91532]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 15:45:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:18 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 23 15:45:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 23 15:45:18 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 129 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:18 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 129 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=92/93 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:19 np0005532762 python3.9[91686]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 15:45:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:19.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 23 15:45:19 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 130 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=92/92 les/c/f=93/93/0 sis=129) [1]/[0] async=[1] r=0 lpr=129 pi=[92,129)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:20 np0005532762 python3.9[91840]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:45:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 23 15:45:20 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131 pruub=15.474392891s) [1] async=[1] r=-1 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 50'991 active pruub 300.843353271s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:20 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 131 pg[10.1b( v 50'991 (0'0,50'991] local-lis/les=129/130 n=2 ec=57/44 lis/c=129/92 les/c/f=130/93/0 sis=131 pruub=15.474349976s) [1] r=-1 lpr=131 pi=[92,131)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 300.843353271s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:20 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:21 np0005532762 python3.9[91992]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 23 15:45:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 23 15:45:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:21.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:22 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:22 np0005532762 python3.9[92145]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:23.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:23 np0005532762 python3.9[92297]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:23.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:24 np0005532762 python3.9[92376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:45:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:24 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204524 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:45:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:25 np0005532762 python3.9[92528]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:25 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 23 15:45:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 23 15:45:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:25.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:26 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 23 15:45:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:27 np0005532762 python3.9[92683]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 15:45:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:27.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:28 np0005532762 python3.9[92837]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 15:45:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 23 15:45:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 23 15:45:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:28 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:29 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 23 15:45:29 np0005532762 python3.9[92990]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:45:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003c90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:29 np0005532762 python3.9[93143]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 23 15:45:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 23 15:45:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 135 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=135 pruub=14.998577118s) [2] r=-1 lpr=135 pi=[79,135)/1 crt=50'991 mlcod 0'0 active pruub 310.215698242s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 135 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=135 pruub=14.998062134s) [2] r=-1 lpr=135 pi=[79,135)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 310.215698242s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 23 15:45:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 136 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:30 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 136 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=79/80 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:30 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930002a80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 23 15:45:31 np0005532762 python3.9[93295]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:45:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500020e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 23 15:45:31 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 137 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=79/79 les/c/f=80/80/0 sis=136) [2]/[0] async=[2] r=0 lpr=136 pi=[79,136)/1 crt=50'991 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:31.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 23 15:45:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=136/79 les/c/f=137/80/0 sis=138 pruub=15.027297020s) [2] async=[2] r=-1 lpr=138 pi=[79,138)/1 crt=50'991 mlcod 50'991 active pruub 312.487762451s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1e( v 50'991 (0'0,50'991] local-lis/les=136/137 n=5 ec=57/44 lis/c=136/79 les/c/f=137/80/0 sis=138 pruub=15.027228355s) [2] r=-1 lpr=138 pi=[79,138)/1 crt=50'991 mlcod 0'0 unknown NOTIFY pruub 312.487762451s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:32 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 138 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=138) [0] r=0 lpr=138 pi=[103,138)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:32 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 23 15:45:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:32 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:33 np0005532762 python3.9[93449]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 23 15:45:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:33.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:33 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[103,139)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:33 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=103/103 les/c/f=104/104/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[103,139)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.663246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733663489, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3142, "num_deletes": 252, "total_data_size": 10546717, "memory_usage": 10929232, "flush_reason": "Manual Compaction"}
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 23 15:45:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:33.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733778937, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6620082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7544, "largest_seqno": 10681, "table_properties": {"data_size": 6606243, "index_size": 8861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34783, "raw_average_key_size": 22, "raw_value_size": 6576054, "raw_average_value_size": 4270, "num_data_blocks": 384, "num_entries": 1540, "num_filter_entries": 1540, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930629, "oldest_key_time": 1763930629, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 115723 microseconds, and 10998 cpu microseconds.
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.778977) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6620082 bytes OK
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.778995) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780330) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780349) EVENT_LOG_v1 {"time_micros": 1763930733780343, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.780367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 10531575, prev total WAL file size 10531575, number of live WAL files 2.
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.782780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6464KB)], [18(11MB)]
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733782820, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18327825, "oldest_snapshot_seqno": -1}
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4076 keys, 13908069 bytes, temperature: kUnknown
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930733992008, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13908069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13875473, "index_size": 21286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104122, "raw_average_key_size": 25, "raw_value_size": 13795477, "raw_average_value_size": 3384, "num_data_blocks": 915, "num_entries": 4076, "num_filter_entries": 4076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.992192) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13908069 bytes
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.999342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.6 rd, 66.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.3, 11.2 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4614, records dropped: 538 output_compression: NoCompression
Nov 23 15:45:33 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.999366) EVENT_LOG_v1 {"time_micros": 1763930733999356, "job": 8, "event": "compaction_finished", "compaction_time_micros": 209237, "compaction_time_cpu_micros": 27320, "output_level": 6, "num_output_files": 1, "total_output_size": 13908069, "num_input_records": 4614, "num_output_records": 4076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930734000540, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930734002647, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:33.782700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:45:34.002698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:45:34 np0005532762 python3.9[93602]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:34 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 23 15:45:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:34 np0005532762 python3.9[93680]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:34 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:45:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 23 15:45:35 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 141 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 luod=0'0 crt=50'991 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 23 15:45:35 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 141 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=0/0 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 crt=50'991 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 15:45:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003cd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:35.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:35 np0005532762 python3.9[93832]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:45:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:45:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:45:36 np0005532762 python3.9[93911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:45:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 23 15:45:36 np0005532762 ceph-osd[77613]: osd.0 pg_epoch: 142 pg[10.1f( v 50'991 (0'0,50'991] local-lis/les=141/142 n=5 ec=57/44 lis/c=139/103 les/c/f=140/104/0 sis=141) [0] r=0 lpr=141 pi=[103,141)/1 crt=50'991 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 15:45:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:36 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:37 np0005532762 python3.9[94088]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:45:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:37.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:45:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:45:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:45:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:38 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:39.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:39.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:39 np0005532762 python3.9[94322]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:40 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930003570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:40 np0005532762 python3.9[94474]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 15:45:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:45:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:41 np0005532762 python3.9[94624]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:45:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:42 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:43 np0005532762 python3.9[94777]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:43 np0005532762 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 15:45:43 np0005532762 systemd[1]: tuned.service: Deactivated successfully.
Nov 23 15:45:43 np0005532762 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 15:45:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:43 np0005532762 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 15:45:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:43.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:43 np0005532762 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 15:45:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:45:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:45:44 np0005532762 python3.9[94939]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 15:45:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:44 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:44 np0005532762 ceph-mon[80135]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Nov 23 15:45:44 np0005532762 ceph-mon[80135]: Cluster is now healthy
Nov 23 15:45:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:45.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:46 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204546 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:45:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:47.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:48 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6930004280 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:48 np0005532762 python3.9[95093]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:49 np0005532762 python3.9[95249]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:45:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924003d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:49.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:51 np0005532762 systemd[1]: session-38.scope: Deactivated successfully.
Nov 23 15:45:51 np0005532762 systemd[1]: session-38.scope: Consumed 1min 610ms CPU time.
Nov 23 15:45:51 np0005532762 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Nov 23 15:45:51 np0005532762 systemd-logind[793]: Removed session 38.
Nov 23 15:45:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:51.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:45:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:45:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:52 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:45:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:53.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:54 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500044e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:56 np0005532762 systemd-logind[793]: New session 39 of user zuul.
Nov 23 15:45:56 np0005532762 systemd[1]: Started Session 39 of User zuul.
Nov 23 15:45:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:56 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:45:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:57 np0005532762 python3.9[95485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:45:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:45:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:57.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:45:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:45:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:45:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:58 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:58 np0005532762 python3.9[95644]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 15:45:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:45:59.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:45:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:45:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:45:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:45:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:45:59.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:45:59 np0005532762 python3.9[95798]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:00 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:00 np0005532762 python3.9[95882]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:46:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:02 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:03 np0005532762 python3.9[96040]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:03.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:04 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:05 np0005532762 python3.9[96194]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:46:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:05.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:06 np0005532762 python3.9[96348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:06 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:07 np0005532762 python3.9[96500]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 15:46:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:07.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:08 np0005532762 python3.9[96651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:10 np0005532762 python3.9[96810]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:12 np0005532762 python3.9[96966]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:13.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:14 np0005532762 python3.9[97254]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:46:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:15.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:15 np0005532762 python3.9[97404]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:16 np0005532762 python3.9[97559]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:18 np0005532762 python3.9[97738]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:19.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:20 np0005532762 python3.9[97892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:21.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:21 np0005532762 python3.9[98047]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 23 15:46:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:23 np0005532762 systemd[1]: session-39.scope: Deactivated successfully.
Nov 23 15:46:23 np0005532762 systemd[1]: session-39.scope: Consumed 17.613s CPU time.
Nov 23 15:46:23 np0005532762 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Nov 23 15:46:23 np0005532762 systemd-logind[793]: Removed session 39.
Nov 23 15:46:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:23.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:28 np0005532762 systemd-logind[793]: New session 40 of user zuul.
Nov 23 15:46:28 np0005532762 systemd[1]: Started Session 40 of User zuul.
Nov 23 15:46:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:29.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:29 np0005532762 python3.9[98231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:29.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:30 np0005532762 python3.9[98386]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:32 np0005532762 python3.9[98580]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:32 np0005532762 systemd[1]: session-40.scope: Deactivated successfully.
Nov 23 15:46:32 np0005532762 systemd[1]: session-40.scope: Consumed 2.200s CPU time.
Nov 23 15:46:32 np0005532762 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Nov 23 15:46:32 np0005532762 systemd-logind[793]: Removed session 40.
Nov 23 15:46:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:37 np0005532762 systemd-logind[793]: New session 41 of user zuul.
Nov 23 15:46:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:37 np0005532762 systemd[1]: Started Session 41 of User zuul.
Nov 23 15:46:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:38 np0005532762 python3.9[98787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204639 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:46:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:39 np0005532762 python3.9[98941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:39.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:39.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:40 np0005532762 python3.9[99098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:41.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:41 np0005532762 python3.9[99182]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:41 np0005532762 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 15:46:41 np0005532762 systemd[1]: session-19.scope: Consumed 8.960s CPU time.
Nov 23 15:46:41 np0005532762 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Nov 23 15:46:41 np0005532762 systemd-logind[793]: Removed session 19.
Nov 23 15:46:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:41.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6950002920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:43.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:43 np0005532762 python3.9[99336]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:46:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000bf80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:45 np0005532762 python3.9[99534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:46:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:45.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:46:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:45.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:46:46 np0005532762 python3.9[99687]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:47 np0005532762 python3.9[99852]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:46:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:47.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:46:47 np0005532762 python3.9[99931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:46:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:47.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:48 np0005532762 python3.9[100083]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:46:49 np0005532762 python3.9[100161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:46:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:46:50 np0005532762 python3.9[100314]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:46:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:50 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:46:50 np0005532762 python3.9[100466]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:51 np0005532762 python3.9[100618]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:52 np0005532762 python3.9[100771]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:46:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:53 np0005532762 python3.9[100986]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:46:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69540035f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:46:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:46:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:46:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:46:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:46:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:55.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:55 np0005532762 python3.9[101158]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:46:56 np0005532762 python3.9[101312]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:46:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:57 np0005532762 python3.9[101492]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:46:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:57.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:58 np0005532762 python3.9[101669]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:46:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204659 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:46:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:46:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:46:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:46:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:46:59 np0005532762 python3.9[101823]: ansible-service_facts Invoked
Nov 23 15:46:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:46:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:46:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:46:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:46:59 np0005532762 network[101840]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:46:59 np0005532762 network[101841]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:46:59 np0005532762 network[101842]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:47:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:03.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:05.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:06 np0005532762 python3.9[102298]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:47:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:07.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003040 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:08 np0005532762 python3.9[102452]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 15:47:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:10 np0005532762 python3.9[102605]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:10 np0005532762 python3.9[102683]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:12 np0005532762 python3.9[102836]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:12 np0005532762 python3.9[102914]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:14 np0005532762 python3.9[103070]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:16 np0005532762 python3.9[103223]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:47:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:17 np0005532762 python3.9[103332]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954004bd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:18 np0005532762 systemd[1]: session-41.scope: Deactivated successfully.
Nov 23 15:47:18 np0005532762 systemd[1]: session-41.scope: Consumed 22.594s CPU time.
Nov 23 15:47:18 np0005532762 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Nov 23 15:47:18 np0005532762 systemd-logind[793]: Removed session 41.
Nov 23 15:47:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938003d50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:21.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:47:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:47:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:24 np0005532762 systemd-logind[793]: New session 42 of user zuul.
Nov 23 15:47:24 np0005532762 systemd[1]: Started Session 42 of User zuul.
Nov 23 15:47:24 np0005532762 python3.9[103523]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:25.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0095e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:25 np0005532762 python3.9[103678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:26 np0005532762 python3.9[103756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:26 np0005532762 systemd[1]: session-42.scope: Deactivated successfully.
Nov 23 15:47:26 np0005532762 systemd[1]: session-42.scope: Consumed 1.431s CPU time.
Nov 23 15:47:26 np0005532762 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Nov 23 15:47:26 np0005532762 systemd-logind[793]: Removed session 42.
Nov 23 15:47:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:31 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938004e50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:32 np0005532762 systemd-logind[793]: New session 43 of user zuul.
Nov 23 15:47:32 np0005532762 systemd[1]: Started Session 43 of User zuul.
Nov 23 15:47:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:33 np0005532762 python3.9[103937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:47:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:33 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009640 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:34 np0005532762 python3.9[104095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:35 np0005532762 python3.9[104270]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:35.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:35 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:35 np0005532762 python3.9[104349]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.k4wo73f1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:35.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:37 np0005532762 python3.9[104501]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009660 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:37 np0005532762 python3.9[104604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.e4qcnt0b recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:37 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:38 np0005532762 python3.9[104757]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:39 np0005532762 python3.9[104909]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:39 np0005532762 python3.9[104987]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:39 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:47:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:47:40 np0005532762 python3.9[105140]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:40 np0005532762 python3.9[105218]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:47:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0096c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:41 np0005532762 systemd[82658]: Created slice User Background Tasks Slice.
Nov 23 15:47:41 np0005532762 systemd[82658]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 15:47:41 np0005532762 systemd[82658]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 15:47:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:41.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:41 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:41 np0005532762 python3.9[105372]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:42 np0005532762 python3.9[105524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:43 np0005532762 python3.9[105602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:43 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:47:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:47:44 np0005532762 python3.9[105755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:44 np0005532762 python3.9[105833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920002550 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:45 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:47:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:47:45 np0005532762 python3.9[105986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:45 np0005532762 systemd[1]: Reloading.
Nov 23 15:47:46 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:47:46 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:47:47 np0005532762 python3.9[106175]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f692c004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:47 np0005532762 python3.9[106253]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:47.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:47 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:48 np0005532762 python3.9[106406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:48 np0005532762 python3.9[106484]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:47:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:47:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:49 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:49 np0005532762 python3.9[106637]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:47:49 np0005532762 systemd[1]: Reloading.
Nov 23 15:47:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:49.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:49 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:47:49 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:47:50 np0005532762 systemd[1]: Starting Create netns directory...
Nov 23 15:47:50 np0005532762 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:47:50 np0005532762 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:47:50 np0005532762 systemd[1]: Finished Create netns directory.
Nov 23 15:47:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:51 np0005532762 python3.9[106830]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:47:51 np0005532762 network[106847]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:47:51 np0005532762 network[106848]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:47:51 np0005532762 network[106849]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:47:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009740 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:51.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:51 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:47:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:47:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:53 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:53.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69500040f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:55 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009780 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:47:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:57 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:57.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:58 np0005532762 python3.9[107140]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:47:58 np0005532762 python3.9[107280]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:47:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:47:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:47:59 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:47:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:47:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:47:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:47:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:47:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:47:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:47:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:47:59 np0005532762 python3.9[107452]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:00 np0005532762 python3.9[107604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:01 np0005532762 python3.9[107682]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:01 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:02 np0005532762 python3.9[107837]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 15:48:02 np0005532762 systemd[1]: Starting Time & Date Service...
Nov 23 15:48:02 np0005532762 systemd[1]: Started Time & Date Service.
Nov 23 15:48:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:03 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920004530 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:03 np0005532762 python3.9[108001]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:48:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:04 np0005532762 python3.9[108171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0097e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:05 np0005532762 python3.9[108249]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:05 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:06 np0005532762 python3.9[108403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:06 np0005532762 python3.9[108481]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6aanxriz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:07 np0005532762 python3.9[108633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:07 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:08 np0005532762 python3.9[108712]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:09 np0005532762 python3.9[108864]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:09 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:09.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:09 np0005532762 python3[109018]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:48:10 np0005532762 python3.9[109170]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:11 np0005532762 python3.9[109248]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:11 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009860 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:11.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:12 np0005532762 python3.9[109401]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:12 np0005532762 python3.9[109479]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:13 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:13 np0005532762 python3.9[109632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:13.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:14 np0005532762 python3.9[109710]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:15 np0005532762 python3.9[109862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:15 np0005532762 python3.9[109941]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:15 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:15.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:16 np0005532762 python3.9[110093]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:17 np0005532762 python3.9[110171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6938002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:17 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:17.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:18 np0005532762 python3.9[110350]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:19 np0005532762 python3.9[110505]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:19 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009900 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:19.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:20 np0005532762 python3.9[110658]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:21 np0005532762 python3.9[110810]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:21 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:21.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:22 np0005532762 python3.9[110963]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:48:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:22 np0005532762 python3.9[111117]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 15:48:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:23 np0005532762 systemd[1]: session-43.scope: Deactivated successfully.
Nov 23 15:48:23 np0005532762 systemd[1]: session-43.scope: Consumed 29.733s CPU time.
Nov 23 15:48:23 np0005532762 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Nov 23 15:48:23 np0005532762 systemd-logind[793]: Removed session 43.
Nov 23 15:48:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:23 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:25 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c009920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:27.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:27 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6954001e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:28 np0005532762 systemd-logind[793]: New session 44 of user zuul.
Nov 23 15:48:28 np0005532762 systemd[1]: Started Session 44 of User zuul.
Nov 23 15:48:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695000c760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:29 np0005532762 python3.9[111300]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 15:48:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6924002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[85668]: 23/11/2025 20:48:29 : epoch 692371c6 : compute-1 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c00ad80 fd 49 proxy ignored for local
Nov 23 15:48:29 np0005532762 kernel: ganesha.nfsd[95492]: segfault at 50 ip 00007f6a0584032e sp 00007f69d57f9210 error 4 in libntirpc.so.5.8[7f6a05825000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 23 15:48:29 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:48:29 np0005532762 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 23 15:48:29 np0005532762 systemd[1]: Started Process Core Dump (PID 111326/UID 0).
Nov 23 15:48:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:30 np0005532762 python3.9[111455]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:48:30 np0005532762 systemd-coredump[111327]: Process 85672 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 68:#012#0  0x00007f6a0584032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:48:31 np0005532762 systemd[1]: systemd-coredump@0-111326-0.service: Deactivated successfully.
Nov 23 15:48:31 np0005532762 systemd[1]: systemd-coredump@0-111326-0.service: Consumed 1.142s CPU time.
Nov 23 15:48:31 np0005532762 podman[111531]: 2025-11-23 20:48:31.052822171 +0000 UTC m=+0.024876145 container died 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 23 15:48:31 np0005532762 systemd[1]: var-lib-containers-storage-overlay-b8322ba23651b391cd38f2980d80d3d4d5a77a2d7c68fccc64436bbb1b0ee305-merged.mount: Deactivated successfully.
Nov 23 15:48:31 np0005532762 podman[111531]: 2025-11-23 20:48:31.089008259 +0000 UTC m=+0.061062183 container remove 466d10d0fad1c5a4f86b3f6ff6a62c2f5b4e27c7206b481850c43d696b989539 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:48:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:48:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:48:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.990s CPU time.
Nov 23 15:48:31 np0005532762 python3.9[111659]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 23 15:48:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:31.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:32 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:48:32 np0005532762 python3.9[111812]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1u9ifphu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:48:32 np0005532762 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 15:48:33 np0005532762 python3.9[111940]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1u9ifphu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930911.8714442-103-260712237537886/.source.1u9ifphu _original_basename=.ez6migu3 follow=False checksum=6cd7b37efcd593debc42fa9bb68a32d60f10fcfa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:33.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:34 np0005532762 python3.9[112093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:35 np0005532762 python3.9[112245]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZyfELJX7KkP8E4Yo+r9guKNy64TSJDfB+rBUAclCyKwGxjxhBTRAJJCOL6kSBIkbUub9LTNVh+s271jrKlK1rYs22c1DFe3ci9hBERauX4lIaBHw9kJBHURb9cB+VbonXf0hAdqGDLTXdqFnbed2oU0ngSuVesO/C9+SCSZFsfERuUe3/SXKbWfjehgYTi4GquXo6Ynq1HopME6mRR8qGsv6sgdkxpSaUiwtSBG5ONOSyzrev1t2hdDsRxvbZAZgV2ab6IMD9DTKaIXphHpumL6txas+nKViUfm+gW6p6EKNdHb/VLha7ghY3p4LE3OdXM4eytxszF0Fzs/0CXzafNxHjVjHzqxrJBi/PT22i6QD60NTimabHulw8IkZG6KsuNVq1rmlSSGQGjqAs7l6hNH8kF4uq1JwOl6mVgct5iE+ZzhfO5WRWShiE1LlCZpqdYE9VqmBrK5r70N0srW3h2mb4lTAwvC089Vert64D29M7riepyGCrGInpE4aK7Sk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFop+sR8mOkxOfCCMKg8Voa+6Ns0zHMRLKg+WdnL56v#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQ0Rj0/OjRh0AQLkOX0VueFFf3xD5FqSzewSN/8R0Xh0Ybf7bkNUGszKaTkKSUBKR2e9V/GwA+BxEChWtzU3sY=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrfRiqah4FSYlin2mt3PYchMDfWNjxPXqcCCW7iymA93OXZ1reX9dxsJRSssuxIkwaYv7OC+wrUmMOsDhULhy9uNDku8TnHodZVNms8z3UwQW2GPePqEdQ56rKSJ5DhpY0ly7PapOQ69jitmBGQjsu8go19hV3djXlFm1du9V1HMnfGqyr5REZ5ACjW2Rr0108gdYgrt/xh+1sl7cgixK0vUKaqN47/VJHXSTk20aXknt5lhurSKMbRD4cgP1pz0lBJ8LfEvFajLlXBk7MtsI8L94qtHH20hWUk8P2FmqsM4LoLIY4YkAT6kzDPkNdC5F3bpl67NzNXKLdStChVsjRVgrsR0JhU4YO8nYPSqn85KWQUMsuQhXfeMPb5a0n4vSmF0hQhaTctIIK5Yq+qK3S5Ee0tV+ZLMcrYiRfVJYjULh+8LazeUYBtZAVkOoenlHNpcxfVl2v8Fx37PYu6wY/1Ol7i+Fyg+DMculPNu0E00hYIfuSPW06sm98V0zJ7bs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC0+oolG6Djq6MTp/HXh3SEc2a8aDRu5q8AnCiNHx/fN#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1GCZqvti/wHDh2Oo7NSAFToY/dykBAXL2bgJmg9kqKO2qTzfIYtCRiGP/x9yaw+D3ymaftMgdHgFkzRtYcXz0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3+sqhh74Wal6wWv19BRNHNnjTPYKculYCUftHSfYmbg5LryLTnsWAJdalXVBYQIJtq5uFrJRBG4C0R1XMU/MT4ZxuTtafwAzeTnKoCHbN/+mH31bndpvGKYRQ9AQHmamquyDQaSEjIYKFaK6eM7uVV/PaSZqasrB6awv3MeDH/GhtlyJwY7ble8M3UtG9jMWuPq/qX+TnKCZI3COyKBCe7F3aeaIewsho+T7qsRd8UNr55SHWJ1N6xYtA4FUayJ4cCZUeo4+SOJuQWb6A3HZm75y0LpdLDFH54DqyDqKVvDUfaKJJQV++3GT9kF9+jrwJDEK9VslSlEylLZ0zg1J0Z2zyMOwOAxBKEUXQNymC+00ybwJd4trP7KDy6+ZGOtHEThBgVO6vtuxQLWhseNa3otNXh7cHTf+Jfo7uo1wHbasd6aD1AVxvt4yKgOGy1ypt9Ps/COlbfHHFYZsI5gVLyJyK8aeipUjJUe6u6Qlf/F/inV1rwRBg8li7oeW7Ss=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE96kcIFDgsK09K4ZL9HihPRGUmf4YDgXlXqtYy0M8r#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJoWf98fFp9mmY0S22K7n+FjL7cDYCGLm8eglORId7ZBFp9PG5e8P+ws6VWjBbceNazmskqBYurrlrsvB4Mu40E=#012 create=True mode=0644 path=/tmp/ansible.1u9ifphu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204835 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:48:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:36 np0005532762 python3.9[112398]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1u9ifphu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:37 np0005532762 python3.9[112576]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1u9ifphu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:48:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:48:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:37.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:38 np0005532762 systemd[1]: session-44.scope: Deactivated successfully.
Nov 23 15:48:38 np0005532762 systemd[1]: session-44.scope: Consumed 5.142s CPU time.
Nov 23 15:48:38 np0005532762 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Nov 23 15:48:38 np0005532762 systemd-logind[793]: Removed session 44.
Nov 23 15:48:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:39.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:39.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:41 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 1.
Nov 23 15:48:41 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:48:41 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.990s CPU time.
Nov 23 15:48:41 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:48:41 np0005532762 podman[112654]: 2025-11-23 20:48:41.519931699 +0000 UTC m=+0.036703251 container create 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:48:41 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:41 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:41 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:41 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:48:41 np0005532762 podman[112654]: 2025-11-23 20:48:41.575946444 +0000 UTC m=+0.092718006 container init 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 15:48:41 np0005532762 podman[112654]: 2025-11-23 20:48:41.582638852 +0000 UTC m=+0.099410404 container start 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 23 15:48:41 np0005532762 bash[112654]: 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22
Nov 23 15:48:41 np0005532762 podman[112654]: 2025-11-23 20:48:41.502465481 +0000 UTC m=+0.019237063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:48:41 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:48:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:48:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:41.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:43 np0005532762 systemd-logind[793]: New session 45 of user zuul.
Nov 23 15:48:43 np0005532762 systemd[1]: Started Session 45 of User zuul.
Nov 23 15:48:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:48:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:43.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:48:44 np0005532762 python3.9[112867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.430532) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925430710, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2105, "num_deletes": 251, "total_data_size": 6065391, "memory_usage": 6152072, "flush_reason": "Manual Compaction"}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925450225, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2473480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10686, "largest_seqno": 12786, "table_properties": {"data_size": 2467201, "index_size": 3222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15622, "raw_average_key_size": 20, "raw_value_size": 2453502, "raw_average_value_size": 3178, "num_data_blocks": 143, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930734, "oldest_key_time": 1763930734, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19729 microseconds, and 5682 cpu microseconds.
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.450267) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2473480 bytes OK
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.450283) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451660) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451673) EVENT_LOG_v1 {"time_micros": 1763930925451669, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.451689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6056177, prev total WAL file size 6056177, number of live WAL files 2.
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.452828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2415KB)], [21(13MB)]
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925452889, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16381549, "oldest_snapshot_seqno": -1}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4415 keys, 14652851 bytes, temperature: kUnknown
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925675498, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14652851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14619139, "index_size": 21570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111438, "raw_average_key_size": 25, "raw_value_size": 14534352, "raw_average_value_size": 3292, "num_data_blocks": 926, "num_entries": 4415, "num_filter_entries": 4415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.675715) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14652851 bytes
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.678426) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.6 rd, 65.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 13.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.9) OK, records in: 4848, records dropped: 433 output_compression: NoCompression
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.678444) EVENT_LOG_v1 {"time_micros": 1763930925678435, "job": 10, "event": "compaction_finished", "compaction_time_micros": 222677, "compaction_time_cpu_micros": 38694, "output_level": 6, "num_output_files": 1, "total_output_size": 14652851, "num_input_records": 4848, "num_output_records": 4415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925678884, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930925681098, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.452765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:48:45.681185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:48:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:45.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:46 np0005532762 python3.9[113026]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:48:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:48:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:48:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:48 np0005532762 python3.9[113181]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:48:49 np0005532762 python3.9[113334]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:48:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:50 np0005532762 python3.9[113488]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:48:51 np0005532762 python3.9[113640]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:48:51 np0005532762 systemd[1]: session-45.scope: Deactivated successfully.
Nov 23 15:48:51 np0005532762 systemd[1]: session-45.scope: Consumed 3.859s CPU time.
Nov 23 15:48:51 np0005532762 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Nov 23 15:48:51 np0005532762 systemd-logind[793]: Removed session 45.
Nov 23 15:48:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:48:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:48:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:48:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204855 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:48:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:56 np0005532762 systemd-logind[793]: New session 46 of user zuul.
Nov 23 15:48:56 np0005532762 systemd[1]: Started Session 46 of User zuul.
Nov 23 15:48:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:48:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:57 np0005532762 python3.9[113863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:48:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:48:58.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:59 np0005532762 python3.9[114019]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:48:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:48:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:48:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:48:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:48:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:48:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:48:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:00 np0005532762 python3.9[114104]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 15:49:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:02.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:02 np0005532762 python3.9[114256]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:49:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:03 np0005532762 python3.9[114408]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:49:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:04.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:04 np0005532762 python3.9[114640]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:49:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:49:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:49:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:05 np0005532762 python3.9[114790]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:49:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:05.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:06 np0005532762 systemd[1]: session-46.scope: Deactivated successfully.
Nov 23 15:49:06 np0005532762 systemd[1]: session-46.scope: Consumed 5.617s CPU time.
Nov 23 15:49:06 np0005532762 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Nov 23 15:49:06 np0005532762 systemd-logind[793]: Removed session 46.
Nov 23 15:49:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204907 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:49:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.845378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949845446, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 511, "num_deletes": 251, "total_data_size": 771857, "memory_usage": 783000, "flush_reason": "Manual Compaction"}
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949854419, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 509525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12791, "largest_seqno": 13297, "table_properties": {"data_size": 506826, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6430, "raw_average_key_size": 18, "raw_value_size": 501364, "raw_average_value_size": 1440, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930926, "oldest_key_time": 1763930926, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9058 microseconds, and 1829 cpu microseconds.
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854449) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 509525 bytes OK
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.854463) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855556) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855576) EVENT_LOG_v1 {"time_micros": 1763930949855571, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.855594) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 768833, prev total WAL file size 768833, number of live WAL files 2.
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.856155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(497KB)], [24(13MB)]
Nov 23 15:49:09 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930949856191, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 15162376, "oldest_snapshot_seqno": -1}
Nov 23 15:49:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:10.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4248 keys, 13435255 bytes, temperature: kUnknown
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950040699, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13435255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13404303, "index_size": 19258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108882, "raw_average_key_size": 25, "raw_value_size": 13324036, "raw_average_value_size": 3136, "num_data_blocks": 815, "num_entries": 4248, "num_filter_entries": 4248, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763930949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.040954) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13435255 bytes
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.042238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.1 rd, 72.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(56.1) write-amplify(26.4) OK, records in: 4763, records dropped: 515 output_compression: NoCompression
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.042253) EVENT_LOG_v1 {"time_micros": 1763930950042246, "job": 12, "event": "compaction_finished", "compaction_time_micros": 184597, "compaction_time_cpu_micros": 27093, "output_level": 6, "num_output_files": 1, "total_output_size": 13435255, "num_input_records": 4763, "num_output_records": 4248, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950042405, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763930950045096, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:09.856073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:10 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:49:10.045133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:49:11 np0005532762 systemd-logind[793]: New session 47 of user zuul.
Nov 23 15:49:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:11 np0005532762 systemd[1]: Started Session 47 of User zuul.
Nov 23 15:49:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:12.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:12 np0005532762 python3.9[114999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:49:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:14.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:14 np0005532762 python3.9[115156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:14 np0005532762 python3.9[115308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:15 np0005532762 python3.9[115461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:49:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:16.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:16 np0005532762 python3.9[115584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930955.1299405-155-157013120365244/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=c6695f794168bb06a68458e4c4302f75682e8d66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:16 np0005532762 python3.9[115736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:17 np0005532762 python3.9[115859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930956.5145817-155-107003026360121/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=837e8dcdbcb3ca01e6b5360b86e6942411e1cc1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:18 np0005532762 python3.9[116037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:18 np0005532762 python3.9[116160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930957.5980859-155-269360365312086/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=b07f2c98942f4a42e88a4fd6c2dfd6797a26d65b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:18 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:49:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:18 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:49:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:19 np0005532762 python3.9[116314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:20.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:20 np0005532762 python3.9[116467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:20 np0005532762 python3.9[116619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:21 np0005532762 python3.9[116742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930960.4042263-323-251517065325017/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=5573c0dbfa105202cd0bc263e2740c0ee40f10d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:49:21 np0005532762 python3.9[116895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:22.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:22 np0005532762 python3.9[117018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930961.5185506-323-85067431312601/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:23 np0005532762 python3.9[117170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:23.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:23 np0005532762 python3.9[117294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930962.647661-323-50759445602601/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=da68e833cddbc2fb38a5a85f757ef73f04436e47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:24.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:24 np0005532762 python3.9[117446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:25 np0005532762 python3.9[117598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:25 np0005532762 python3.9[117752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:26 np0005532762 python3.9[117875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930965.4880726-484-54760953798451/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=e255433b2b130bb49d47746bfb39bf4444637eba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:27 np0005532762 python3.9[118027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204927 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:49:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:27 np0005532762 python3.9[118151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930966.7242484-484-40941470378252/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=26cfebde0335fa79ed2e9639d0ee86f73b64ddb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000051s ======
Nov 23 15:49:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:27.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 23 15:49:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:28 np0005532762 python3.9[118305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:29 np0005532762 python3.9[118428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930967.8974254-484-97027044703044/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=dc8cfd8f437b6e825d312f4878d06173fdcec8c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:30 np0005532762 python3.9[118581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:30 np0005532762 python3.9[118733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:31 np0005532762 python3.9[118856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930970.4093323-672-257334711054345/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:31.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:32 np0005532762 python3.9[119009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:32 np0005532762 python3.9[119161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:33 np0005532762 python3.9[119284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930972.3864365-748-51528642073617/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24480016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:33.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:34 np0005532762 python3.9[119437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:34 np0005532762 python3.9[119589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:35 np0005532762 python3.9[119712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930974.3081045-818-34005275327308/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:36 np0005532762 python3.9[119865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:36 np0005532762 python3.9[120017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:37 np0005532762 python3.9[120140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930976.3147132-887-2070853817079/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:37.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:38 np0005532762 python3.9[120318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:38 np0005532762 python3.9[120470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:39 np0005532762 python3.9[120593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930978.1828322-954-67957460539938/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:40 np0005532762 python3.9[120746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:49:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:40 np0005532762 python3.9[120898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:41 np0005532762 python3.9[121021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763930980.197119-1023-110698636821872/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=848940549ac5db80ec615963c7c09743939a62fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:47 np0005532762 systemd[1]: session-47.scope: Deactivated successfully.
Nov 23 15:49:47 np0005532762 systemd[1]: session-47.scope: Consumed 21.852s CPU time.
Nov 23 15:49:47 np0005532762 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Nov 23 15:49:47 np0005532762 systemd-logind[793]: Removed session 47.
Nov 23 15:49:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:49.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/204951 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:49:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:49:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:51.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:49:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:52.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:52 np0005532762 systemd-logind[793]: New session 48 of user zuul.
Nov 23 15:49:52 np0005532762 systemd[1]: Started Session 48 of User zuul.
Nov 23 15:49:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:53 np0005532762 python3.9[121209]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:54 np0005532762 python3.9[121361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:55 np0005532762 python3.9[121484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930994.0915732-63-267165231247700/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=756e8313f47ae598921d0392828cdc60f53012e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:56 np0005532762 python3.9[121637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:49:56 np0005532762 python3.9[121762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763930995.696197-63-154300165805421/.source.conf _original_basename=ceph.conf follow=False checksum=d92b20e9a86369ec384ba170ca716bfc5aeaba51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:49:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:57 np0005532762 systemd[1]: session-48.scope: Deactivated successfully.
Nov 23 15:49:57 np0005532762 systemd[1]: session-48.scope: Consumed 2.768s CPU time.
Nov 23 15:49:57 np0005532762 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Nov 23 15:49:57 np0005532762 systemd-logind[793]: Removed session 48.
Nov 23 15:49:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:49:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:57.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:49:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:49:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:49:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:49:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:49:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:49:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:49:59.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:49:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:49:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:50:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:00.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:00 np0005532762 ceph-mon[80135]: overall HEALTH_OK
Nov 23 15:50:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:02 np0005532762 systemd-logind[793]: New session 49 of user zuul.
Nov 23 15:50:02 np0005532762 systemd[1]: Started Session 49 of User zuul.
Nov 23 15:50:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:50:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:50:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:50:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:03 np0005532762 python3.9[121969]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:04 np0005532762 python3.9[122126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:05 np0005532762 python3.9[122278]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:50:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:06.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:06 np0005532762 python3.9[122429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:07 np0005532762 python3.9[122583]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 15:50:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:08.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:09 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 23 15:50:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:10 np0005532762 python3.9[122864]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:50:11 np0005532762 python3.9[122980]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:50:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:11.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205013 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:50:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:13 np0005532762 python3.9[123134]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:50:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:14.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:14 np0005532762 python3[123290]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 23 15:50:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:15 np0005532762 python3.9[123442]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:16.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:16 np0005532762 python3.9[123597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:17 np0005532762 python3.9[123700]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:50:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:17 np0005532762 python3.9[123853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:18.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:18 np0005532762 python3.9[123956]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x68dw7a8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:19 np0005532762 python3.9[124108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480009770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:19 np0005532762 python3.9[124187]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:20 np0005532762 python3.9[124339]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:21 np0005532762 python3[124494]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:50:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:22 np0005532762 python3.9[124646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:23 np0005532762 python3.9[124771]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931021.9635844-432-12731109785803/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:24 np0005532762 python3.9[124924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:24 np0005532762 python3.9[125049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931023.5975683-477-266356286050086/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:25 np0005532762 python3.9[125202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:26 np0005532762 python3.9[125327]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931025.238438-522-198293900914498/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:27 np0005532762 python3.9[125479]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:27 np0005532762 python3.9[125605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931026.8137777-567-189985884401335/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:28 np0005532762 python3.9[125757]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:29 np0005532762 python3.9[125883]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931028.3467958-612-47770686636605/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:30.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:30 np0005532762 python3.9[126037]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:31 np0005532762 python3.9[126189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:31.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:32.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:32 np0005532762 python3.9[126345]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:33 np0005532762 python3.9[126497]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:33.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:34 np0005532762 python3.9[126651]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:50:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:50:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 21.45 MB, 0.04 MB/s#012Interval WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 15:50:35 np0005532762 python3.9[126805]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:36 np0005532762 python3.9[126961]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:36.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:37 np0005532762 python3.9[127111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:50:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:38.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:39 np0005532762 python3.9[127290]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:39 np0005532762 ovs-vsctl[127292]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 23 15:50:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:39 np0005532762 python3.9[127445]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:40.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:40 np0005532762 python3.9[127600]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:50:40 np0005532762 ovs-vsctl[127601]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 23 15:50:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680014b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:41 np0005532762 python3.9[127752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:50:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 23 15:50:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:42.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 23 15:50:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:42.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:42 np0005532762 python3.9[127906]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:43 np0005532762 python3.9[128059]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:44.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:44 np0005532762 python3.9[128137]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:44.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:44 np0005532762 python3.9[128289]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:45 np0005532762 python3.9[128367]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:46 np0005532762 python3.9[128520]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:47 np0005532762 python3.9[128672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:47 np0005532762 python3.9[128751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:48 np0005532762 python3.9[128903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:49 np0005532762 python3.9[128981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:50.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:50 np0005532762 python3.9[129134]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:50:50 np0005532762 systemd[1]: Reloading.
Nov 23 15:50:50 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:50:50 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:50:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:51 np0005532762 python3.9[129324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24680038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:52 np0005532762 python3.9[129403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:52 np0005532762 python3.9[129555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:53 np0005532762 python3.9[129633]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:50:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:50:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:50:54 np0005532762 python3.9[129787]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:50:54 np0005532762 systemd[1]: Reloading.
Nov 23 15:50:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:54 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:50:54 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:50:54 np0005532762 systemd[1]: Starting Create netns directory...
Nov 23 15:50:54 np0005532762 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:50:54 np0005532762 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:50:54 np0005532762 systemd[1]: Finished Create netns directory.
Nov 23 15:50:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:55 np0005532762 python3.9[129984]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:56.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:56.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:56 np0005532762 python3.9[130136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:57 np0005532762 python3.9[130259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931056.2854304-1365-222984569254372/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:50:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:50:58.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:50:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:50:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:50:58.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:50:58 np0005532762 python3.9[130412]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:50:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:59 np0005532762 python3.9[130589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:50:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:50:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:50:59 np0005532762 python3.9[130713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931058.85419-1440-142060403416495/.source.json _original_basename=.dtpuaz6g follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:00 np0005532762 python3.9[130866]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740043d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:02.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:03 np0005532762 python3.9[131294]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 23 15:51:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740043f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:04 np0005532762 python3.9[131447]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:51:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:05 np0005532762 python3.9[131600]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:51:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 15:51:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2470 writes, 14K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2470 writes, 2470 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2470 writes, 14K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 38.81 MB, 0.06 MB/s#012Interval WAL: 2470 writes, 2470 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.5      0.43              0.05         6    0.071       0      0       0.0       0.0#012  L6      1/0   12.81 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9     80.0     70.3      0.90              0.16         5    0.180     21K   2261       0.0       0.0#012 Sum      1/0   12.81 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     54.3     64.0      1.33              0.21        11    0.120     21K   2261       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     54.4     64.1      1.32              0.21        10    0.132     21K   2261       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     80.0     70.3      0.90              0.16         5    0.180     21K   2261       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.7      0.42              0.05         5    0.085       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.021#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.3 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 2.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(170,2.33 MB,0.765188%) FilterBlock(11,69.42 KB,0.0223009%) IndexBlock(11,138.52 KB,0.0444964%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 15:51:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:07 np0005532762 python3[131780]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:51:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:08.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:08.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:10.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:10.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004450 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:12.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:14.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:14.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:16.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:16.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2454004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:18.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:19 np0005532762 podman[131793]: 2025-11-23 20:51:19.888918858 +0000 UTC m=+11.967020025 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740044b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:20 np0005532762 podman[132026]: 2025-11-23 20:51:20.000535723 +0000 UTC m=+0.020889910 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:20 np0005532762 podman[132026]: 2025-11-23 20:51:20.120469888 +0000 UTC m=+0.140824065 container create 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 15:51:20 np0005532762 python3[131780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 23 15:51:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:21 np0005532762 python3.9[132216]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:51:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:51:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:22 np0005532762 python3.9[132371]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:22 np0005532762 python3.9[132447]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:51:22 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:22 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:22 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:51:23 np0005532762 python3.9[132598]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931082.6954741-1704-28837890510092/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:51:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:23 np0005532762 python3.9[132675]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:51:23 np0005532762 systemd[1]: Reloading.
Nov 23 15:51:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:23 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:23 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:24 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:24 np0005532762 python3.9[132787]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:51:24 np0005532762 systemd[1]: Reloading.
Nov 23 15:51:25 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:25 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:25 np0005532762 systemd[1]: Starting ovn_controller container...
Nov 23 15:51:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740044f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:25 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:51:25 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24963beabbf068cbcc5810ef578cb753310562df52d20741745cffaa9d82c286/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 15:51:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:26.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:26 np0005532762 systemd[1]: Started /usr/bin/podman healthcheck run 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87.
Nov 23 15:51:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:26.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:27 np0005532762 podman[132828]: 2025-11-23 20:51:27.063264131 +0000 UTC m=+1.761016237 container init 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + sudo -E kolla_set_configs
Nov 23 15:51:27 np0005532762 podman[132828]: 2025-11-23 20:51:27.092641954 +0000 UTC m=+1.790394040 container start 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 15:51:27 np0005532762 systemd[1]: Created slice User Slice of UID 0.
Nov 23 15:51:27 np0005532762 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 15:51:27 np0005532762 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 15:51:27 np0005532762 systemd[1]: Starting User Manager for UID 0...
Nov 23 15:51:27 np0005532762 edpm-start-podman-container[132828]: ovn_controller
Nov 23 15:51:27 np0005532762 edpm-start-podman-container[132827]: Creating additional drop-in dependency for "ovn_controller" (5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87)
Nov 23 15:51:27 np0005532762 podman[132854]: 2025-11-23 20:51:27.217267612 +0000 UTC m=+0.113679392 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 15:51:27 np0005532762 systemd[1]: 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87-1b473563f9c00050.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 15:51:27 np0005532762 systemd[1]: 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87-1b473563f9c00050.service: Failed with result 'exit-code'.
Nov 23 15:51:27 np0005532762 systemd[1]: Reloading.
Nov 23 15:51:27 np0005532762 systemd[132867]: Queued start job for default target Main User Target.
Nov 23 15:51:27 np0005532762 systemd[132867]: Created slice User Application Slice.
Nov 23 15:51:27 np0005532762 systemd[132867]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 15:51:27 np0005532762 systemd[132867]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 15:51:27 np0005532762 systemd[132867]: Reached target Paths.
Nov 23 15:51:27 np0005532762 systemd[132867]: Reached target Timers.
Nov 23 15:51:27 np0005532762 systemd[132867]: Starting D-Bus User Message Bus Socket...
Nov 23 15:51:27 np0005532762 systemd[132867]: Starting Create User's Volatile Files and Directories...
Nov 23 15:51:27 np0005532762 systemd[132867]: Finished Create User's Volatile Files and Directories.
Nov 23 15:51:27 np0005532762 systemd[132867]: Listening on D-Bus User Message Bus Socket.
Nov 23 15:51:27 np0005532762 systemd[132867]: Reached target Sockets.
Nov 23 15:51:27 np0005532762 systemd[132867]: Reached target Basic System.
Nov 23 15:51:27 np0005532762 systemd[132867]: Reached target Main User Target.
Nov 23 15:51:27 np0005532762 systemd[132867]: Startup finished in 120ms.
Nov 23 15:51:27 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:51:27 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:51:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:27 np0005532762 systemd[1]: Started User Manager for UID 0.
Nov 23 15:51:27 np0005532762 systemd[1]: Started ovn_controller container.
Nov 23 15:51:27 np0005532762 systemd[1]: Started Session c1 of User root.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: INFO:__main__:Validating config file
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: INFO:__main__:Writing out command to execute
Nov 23 15:51:27 np0005532762 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: ++ cat /run_command
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + ARGS=
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + sudo kolla_copy_cacerts
Nov 23 15:51:27 np0005532762 systemd[1]: Started Session c2 of User root.
Nov 23 15:51:27 np0005532762 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + [[ ! -n '' ]]
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + . kolla_extend_start
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + umask 0022
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 15:51:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8074] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8088] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8113] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 23 15:51:27 np0005532762 kernel: br-int: entered promiscuous mode
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8124] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8146] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00015|main|INFO|OVS feature set changed, force recompute.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:27 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:27Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8346] manager: (ovn-10e3bf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8353] manager: (ovn-6de892-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 23 15:51:27 np0005532762 systemd-udevd[132980]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 15:51:27 np0005532762 kernel: genev_sys_6081: entered promiscuous mode
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8548] device (genev_sys_6081): carrier: link connected
Nov 23 15:51:27 np0005532762 NetworkManager[49021]: <info>  [1763931087.8551] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Nov 23 15:51:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:28 np0005532762 NetworkManager[49021]: <info>  [1763931088.5465] manager: (ovn-fa015a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 23 15:51:29 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:29 np0005532762 python3.9[133111]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:29 np0005532762 ovs-vsctl[133112]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 23 15:51:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24540040c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:30 np0005532762 python3.9[133265]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:30 np0005532762 ovs-vsctl[133267]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 23 15:51:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:31 np0005532762 python3.9[133420]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:51:31 np0005532762 ovs-vsctl[133421]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 23 15:51:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:31 np0005532762 systemd[1]: session-49.scope: Deactivated successfully.
Nov 23 15:51:31 np0005532762 systemd[1]: session-49.scope: Consumed 54.980s CPU time.
Nov 23 15:51:31 np0005532762 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Nov 23 15:51:31 np0005532762 systemd-logind[793]: Removed session 49.
Nov 23 15:51:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:33 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:34 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:35 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:36.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:37 np0005532762 systemd[1]: Stopping User Manager for UID 0...
Nov 23 15:51:37 np0005532762 systemd[132867]: Activating special unit Exit the Session...
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped target Main User Target.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped target Basic System.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped target Paths.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped target Sockets.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped target Timers.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 15:51:37 np0005532762 systemd[132867]: Closed D-Bus User Message Bus Socket.
Nov 23 15:51:37 np0005532762 systemd[132867]: Stopped Create User's Volatile Files and Directories.
Nov 23 15:51:37 np0005532762 systemd[132867]: Removed slice User Application Slice.
Nov 23 15:51:37 np0005532762 systemd[132867]: Reached target Shutdown.
Nov 23 15:51:37 np0005532762 systemd[132867]: Finished Exit the Session.
Nov 23 15:51:37 np0005532762 systemd[132867]: Reached target Exit the Session.
Nov 23 15:51:37 np0005532762 systemd[1]: user@0.service: Deactivated successfully.
Nov 23 15:51:37 np0005532762 systemd[1]: Stopped User Manager for UID 0.
Nov 23 15:51:37 np0005532762 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 15:51:37 np0005532762 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 15:51:37 np0005532762 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 15:51:37 np0005532762 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 15:51:37 np0005532762 systemd[1]: Removed slice User Slice of UID 0.
Nov 23 15:51:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:37 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:38 np0005532762 systemd-logind[793]: New session 51 of user zuul.
Nov 23 15:51:38 np0005532762 systemd[1]: Started Session 51 of User zuul.
Nov 23 15:51:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:38.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:51:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:39 np0005532762 python3.9[133656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:51:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:39 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:40.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:40 np0005532762 python3.9[133813]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:41 np0005532762 python3.9[133965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800022e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:41 np0005532762 python3.9[134118]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:41 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:42.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:42 np0005532762 python3.9[134270]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:43 np0005532762 python3.9[134422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:43 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:44 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:44.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:44 np0005532762 python3.9[134573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:51:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:45 np0005532762 python3.9[134726]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 15:51:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:45 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:46.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f245c001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205147 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:51:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:47 np0005532762 python3.9[134878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:47 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:48.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:48 np0005532762 python3.9[135001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931107.2444031-219-244349140641201/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:49 np0005532762 python3.9[135152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:49 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:49 np0005532762 python3.9[135274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931108.8271143-264-108775523783921/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:51:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:51:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:51 np0005532762 python3.9[135426]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:51:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:51 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:52 np0005532762 python3.9[135511]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:51:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:51:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:51:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2480002380 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:53 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:54 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:54 np0005532762 python3.9[135667]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:51:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:55 np0005532762 python3.9[135820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:55 np0005532762 python3.9[135942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931114.8181174-375-143028732306432/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:55 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24800023a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:56 np0005532762 python3.9[136092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:56 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:51:57 np0005532762 python3.9[136213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931115.9870987-375-133835235041949/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:57 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:57Z|00025|memory|INFO|16768 kB peak resident set size after 29.9 seconds
Nov 23 15:51:57 np0005532762 ovn_controller[132845]: 2025-11-23T20:51:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 23 15:51:57 np0005532762 podman[136239]: 2025-11-23 20:51:57.748197466 +0000 UTC m=+0.166164172 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 15:51:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:57 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:51:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:51:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:51:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:51:58.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:51:58 np0005532762 python3.9[136389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:59 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:51:59 np0005532762 python3.9[136535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931118.254356-507-42289708055651/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:59 np0005532762 python3.9[136686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:51:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:51:59 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:52:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:00 np0005532762 python3.9[136807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931119.3901117-507-152641601991289/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:01 np0005532762 python3.9[136957]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:01 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:02 np0005532762 python3.9[137112]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:02 : epoch 69237329 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:52:03 np0005532762 python3.9[137264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:03 np0005532762 python3.9[137343]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:03 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:04 np0005532762 python3.9[137495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:05 np0005532762 python3.9[137573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:05 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:06 np0005532762 python3.9[137726]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:07 np0005532762 python3.9[137878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:07 np0005532762 python3.9[137956]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:07 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24740048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:08.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:08 np0005532762 python3.9[138109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:08.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:09 np0005532762 python3.9[138187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205209 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:52:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:09 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:10 np0005532762 python3.9[138340]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:52:10 np0005532762 systemd[1]: Reloading.
Nov 23 15:52:10 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:10 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:11 np0005532762 python3.9[138530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:11 np0005532762 python3.9[138609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:11 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:12 np0005532762 python3.9[138761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:13 np0005532762 python3.9[138839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:13 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:14 np0005532762 python3.9[138992]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:52:14 np0005532762 systemd[1]: Reloading.
Nov 23 15:52:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:14 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:14 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:14 np0005532762 systemd[1]: Starting Create netns directory...
Nov 23 15:52:14 np0005532762 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:52:14 np0005532762 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:52:14 np0005532762 systemd[1]: Finished Create netns directory.
Nov 23 15:52:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:15 np0005532762 python3.9[139185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:15 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:16 np0005532762 python3.9[139337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:17 np0005532762 python3.9[139460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931136.0811546-960-57152904868133/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2468002970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:17 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:18 np0005532762 python3.9[139613]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:52:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:19 np0005532762 python3.9[139771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:52:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:19 np0005532762 python3.9[139913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931138.6952834-1035-43771545588682/.source.json _original_basename=.0r_1a0mz follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:19 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:20.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:20 np0005532762 python3.9[140068]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:21 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:22.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:52:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:52:23 np0005532762 python3.9[140496]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 23 15:52:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:23 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:24 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:24 np0005532762 python3.9[140649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:52:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:25 np0005532762 python3.9[140801]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:52:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24500036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448001070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:25 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:27 np0005532762 python3[140981]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:52:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:27 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:28 np0005532762 podman[141028]: 2025-11-23 20:52:28.687659307 +0000 UTC m=+0.097834312 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 15:52:29 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:29 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2450003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2448003240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f248000aa90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:52:31 np0005532762 kernel: ganesha.nfsd[113671]: segfault at 50 ip 00007f25294d132e sp 00007f24ebffe210 error 4 in libntirpc.so.5.8[7f25294b6000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:52:31 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:52:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[112670]: 23/11/2025 20:52:31 : epoch 69237329 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2474004990 fd 38 proxy ignored for local
Nov 23 15:52:31 np0005532762 systemd[1]: Started Process Core Dump (PID 141089/UID 0).
Nov 23 15:52:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:32.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:34.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:35 np0005532762 systemd-coredump[141090]: Process 112675 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f25294d132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:52:35 np0005532762 systemd[1]: systemd-coredump@1-141089-0.service: Deactivated successfully.
Nov 23 15:52:35 np0005532762 systemd[1]: systemd-coredump@1-141089-0.service: Consumed 1.171s CPU time.
Nov 23 15:52:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:39 np0005532762 podman[141097]: 2025-11-23 20:52:39.358436236 +0000 UTC m=+3.501685438 container died 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:52:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205239 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:52:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:43 np0005532762 systemd[1]: var-lib-containers-storage-overlay-36431d4c51e2d3482a2149cb2663510026d0fcb8438692ee02935721d35a5258-merged.mount: Deactivated successfully.
Nov 23 15:52:44 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:52:44 np0005532762 podman[141097]: 2025-11-23 20:52:44.108531885 +0000 UTC m=+8.251781077 container remove 9cce1bf66affa6ef4f347207d4a0ad972590fbbe226e35c4c7f83bf8a6579c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:52:44 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:52:44 np0005532762 podman[140994]: 2025-11-23 20:52:44.155168365 +0000 UTC m=+16.750071148 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 15:52:44 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:52:44 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.780s CPU time.
Nov 23 15:52:44 np0005532762 podman[141311]: 2025-11-23 20:52:44.320933453 +0000 UTC m=+0.058962640 container create ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 23 15:52:44 np0005532762 podman[141311]: 2025-11-23 20:52:44.288410983 +0000 UTC m=+0.026440190 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 15:52:44 np0005532762 python3[140981]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 15:52:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:44.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:44 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 23 15:52:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:52:45 np0005532762 python3.9[141499]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:46 np0005532762 python3.9[141654]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:46 np0005532762 python3.9[141730]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:52:47 np0005532762 python3.9[141881]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931166.7258358-1299-71852450716199/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:52:47 np0005532762 python3.9[141958]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:52:47 np0005532762 systemd[1]: Reloading.
Nov 23 15:52:47 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:47 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:52:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:52:48 np0005532762 python3.9[142071]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:52:48 np0005532762 systemd[1]: Reloading.
Nov 23 15:52:48 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:48 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:49 np0005532762 systemd[1]: Starting ovn_metadata_agent container...
Nov 23 15:52:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:52:49 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:52:49 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4aa44a3bea29e4c51158edcd152131bb41d9075b6cc6f242435ec532892ba2/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:49 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4aa44a3bea29e4c51158edcd152131bb41d9075b6cc6f242435ec532892ba2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:49 np0005532762 systemd[1]: Started /usr/bin/podman healthcheck run ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab.
Nov 23 15:52:49 np0005532762 podman[142137]: 2025-11-23 20:52:49.15189735 +0000 UTC m=+0.116449550 container init ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + sudo -E kolla_set_configs
Nov 23 15:52:49 np0005532762 podman[142137]: 2025-11-23 20:52:49.178897893 +0000 UTC m=+0.143450063 container start ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 15:52:49 np0005532762 edpm-start-podman-container[142137]: ovn_metadata_agent
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Validating config file
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Copying service configuration files
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Writing out command to execute
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 15:52:49 np0005532762 edpm-start-podman-container[142136]: Creating additional drop-in dependency for "ovn_metadata_agent" (ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab)
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: ++ cat /run_command
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + CMD=neutron-ovn-metadata-agent
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + ARGS=
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + sudo kolla_copy_cacerts
Nov 23 15:52:49 np0005532762 systemd[1]: Reloading.
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + [[ ! -n '' ]]
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + . kolla_extend_start
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: Running command: 'neutron-ovn-metadata-agent'
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + umask 0022
Nov 23 15:52:49 np0005532762 ovn_metadata_agent[142153]: + exec neutron-ovn-metadata-agent
Nov 23 15:52:49 np0005532762 podman[142160]: 2025-11-23 20:52:49.262603784 +0000 UTC m=+0.074698301 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 23 15:52:49 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:52:49 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:52:49 np0005532762 systemd[1]: Started ovn_metadata_agent container.
Nov 23 15:52:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.985 142158 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.986 142158 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.986 142158 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.987 142158 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.988 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.989 142158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.990 142158 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.991 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.992 142158 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.993 142158 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.994 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.995 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.996 142158 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.997 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.998 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:50 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:50.999 142158 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.000 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.001 142158 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.002 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.003 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.004 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.005 142158 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.006 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.007 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.008 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.009 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.010 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.011 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.012 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.013 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.014 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.015 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.016 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.017 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.018 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.019 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.020 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.021 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.022 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.023 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.024 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.025 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.026 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.027 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.028 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.029 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.030 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.031 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.032 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.033 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.034 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.035 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.036 142158 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.045 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.046 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.046 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.047 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.047 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.062 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d8ff4ac4-2bee-48db-b79e-2466bc4db046 (UUID: d8ff4ac4-2bee-48db-b79e-2466bc4db046) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.098 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.099 142158 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.103 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.110 142158 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.117 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd8ff4ac4-2bee-48db-b79e-2466bc4db046'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], external_ids={}, name=d8ff4ac4-2bee-48db-b79e-2466bc4db046, nb_cfg_timestamp=1763931095829, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.118 142158 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f53b965ef70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.119 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.119 142158 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.120 142158 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.120 142158 INFO oslo_service.service [-] Starting 1 workers
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.124 142158 DEBUG oslo_service.service [-] Started child 142266 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.127 142266 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-954591'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.128 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpnjoxuln_/privsep.sock']
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.192 142266 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.195 142266 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.202 142266 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.208 142266 INFO eventlet.wsgi.server [-] (142266) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 23 15:52:51 np0005532762 systemd[1]: session-51.scope: Deactivated successfully.
Nov 23 15:52:51 np0005532762 systemd[1]: session-51.scope: Consumed 52.519s CPU time.
Nov 23 15:52:51 np0005532762 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Nov 23 15:52:51 np0005532762 systemd-logind[793]: Removed session 51.
Nov 23 15:52:51 np0005532762 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.783 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.784 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpnjoxuln_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.685 142272 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.689 142272 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.693 142272 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.693 142272 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142272
Nov 23 15:52:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:51.786 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[916c196a-cb37-4057-9df5-a52daf463bf1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.280 142272 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 15:52:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.821 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[54865a17-05b1-4019-b6c2-f71f50a251b6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.823 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, column=external_ids, values=({'neutron:ovn-metadata-id': '37f8a20d-4d8e-5752-b1f6-ae94c68755e0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.841 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.847 142158 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.848 142158 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.849 142158 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.850 142158 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.851 142158 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.852 142158 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.853 142158 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.854 142158 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.855 142158 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.856 142158 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.857 142158 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.858 142158 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.859 142158 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.860 142158 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.861 142158 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.862 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.863 142158 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.864 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.865 142158 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.866 142158 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.867 142158 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.868 142158 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.869 142158 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.870 142158 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.871 142158 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.872 142158 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.873 142158 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.874 142158 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.875 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.876 142158 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.877 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.878 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.879 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.880 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 15:52:52 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:52:52.881 142158 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 15:52:54 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 2.
Nov 23 15:52:54 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:52:54 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.780s CPU time.
Nov 23 15:52:54 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:52:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:54 np0005532762 podman[142323]: 2025-11-23 20:52:54.508902274 +0000 UTC m=+0.037092244 container create 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 23 15:52:54 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:54 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:54 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:54 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:52:54 np0005532762 podman[142323]: 2025-11-23 20:52:54.563440625 +0000 UTC m=+0.091630595 container init 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 23 15:52:54 np0005532762 podman[142323]: 2025-11-23 20:52:54.575278852 +0000 UTC m=+0.103468822 container start 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:52:54 np0005532762 bash[142323]: 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01
Nov 23 15:52:54 np0005532762 podman[142323]: 2025-11-23 20:52:54.494000096 +0000 UTC m=+0.022190086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:52:54 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:52:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:52:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:52:54 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:52:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:52:56 np0005532762 systemd-logind[793]: New session 52 of user zuul.
Nov 23 15:52:56 np0005532762 systemd[1]: Started Session 52 of User zuul.
Nov 23 15:52:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:57 np0005532762 python3.9[142534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:52:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:52:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:52:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:52:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:52:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:52:58 np0005532762 podman[142691]: 2025-11-23 20:52:58.836507689 +0000 UTC m=+0.076146820 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 15:52:58 np0005532762 python3.9[142692]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:00 np0005532762 python3.9[142910]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:53:00 np0005532762 systemd[1]: Reloading.
Nov 23 15:53:00 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:00 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:53:00 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:53:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:00 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:53:01 np0005532762 python3.9[143096]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:53:01 np0005532762 network[143113]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:53:01 np0005532762 network[143114]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:53:01 np0005532762 network[143115]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:53:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:02.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:02.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205303 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:53:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:53:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:04.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:53:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:04 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:05 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 15:53:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205307 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:53:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/205307 (4) : haproxy version is 2.3.17-d1c9119
Nov 23 15:53:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [NOTICE] 326/205307 (4) : path to executable is /usr/local/sbin/haproxy
Nov 23 15:53:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [ALERT] 326/205307 (4) : backend 'backend' has no server available!
Nov 23 15:53:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:53:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:08.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:53:08 np0005532762 python3.9[143380]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:09 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:09.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:09 np0005532762 python3.9[143533]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:10 np0005532762 python3.9[143687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:11 np0005532762 python3.9[143840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:11 np0005532762 python3.9[143994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:12.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:12 np0005532762 python3.9[144147]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:13.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:13 np0005532762 python3.9[144300]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:53:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:15 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:17.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:17 np0005532762 python3.9[144471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205317 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:17 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:18 np0005532762 python3.9[144624]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:18 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:18.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:18 np0005532762 python3.9[144776]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:19 np0005532762 python3.9[144928]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:19.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:19 np0005532762 podman[145043]: 2025-11-23 20:53:19.641527188 +0000 UTC m=+0.054556210 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 15:53:19 np0005532762 python3.9[145125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:19 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:20 np0005532762 python3.9[145277]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:21 np0005532762 python3.9[145429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:21 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:22.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:22 np0005532762 python3.9[145582]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:23 np0005532762 python3.9[145734]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:23 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:24 np0005532762 python3.9[145887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:24.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:24 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:53:24 np0005532762 python3.9[146039]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:25.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:25 np0005532762 python3.9[146191]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:25 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:26 np0005532762 python3.9[146344]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:26.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:26 np0005532762 python3.9[146496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:53:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:27 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:28.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:28 np0005532762 python3.9[146649]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:29 np0005532762 podman[146652]: 2025-11-23 20:53:29.131517853 +0000 UTC m=+0.088926598 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 15:53:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:29 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:30 np0005532762 python3.9[146828]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:53:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:30.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:31 np0005532762 python3.9[146980]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:53:31 np0005532762 systemd[1]: Reloading.
Nov 23 15:53:31 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:53:31 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:53:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:31.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205331 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:31 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:32 np0005532762 python3.9[147168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:32 np0005532762 python3.9[147321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:33 np0005532762 python3.9[147474]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:33 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:33 np0005532762 python3.9[147628]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:34 np0005532762 python3.9[147783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:35 np0005532762 python3.9[147936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:35 np0005532762 python3.9[148090]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:53:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:35 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:37.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:53:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:37 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:38.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:39.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:39 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:40 np0005532762 python3.9[148270]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 23 15:53:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:40.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:53:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:40 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:53:41 np0005532762 python3.9[148423]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 15:53:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:41.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:41 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:43.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:43 np0005532762 python3.9[148582]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 15:53:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f80003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:43 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:53:43 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 15:53:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000010:nfs.cephfs.0: -2
Nov 23 15:53:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:53:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:43 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:44 np0005532762 python3.9[148744]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:53:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:45.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:45 np0005532762 python3.9[148828]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:53:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:45 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:47.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:47 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f74000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205349 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:53:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:49.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:49 np0005532762 podman[148923]: 2025-11-23 20:53:49.889576275 +0000 UTC m=+0.051077850 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 15:53:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4fa4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:53:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[142338]: 23/11/2025 20:53:49 : epoch 69237426 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4f88003c60 fd 39 proxy ignored for local
Nov 23 15:53:49 np0005532762 kernel: ganesha.nfsd[144332]: segfault at 50 ip 00007f505110132e sp 00007f50167fb210 error 4 in libntirpc.so.5.8[7f50510e6000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:53:49 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:53:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:53:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:49 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:53:50 np0005532762 systemd[1]: Started Process Core Dump (PID 148943/UID 0).
Nov 23 15:53:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:53:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:53:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:53:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:53:51 np0005532762 systemd-coredump[148944]: Process 142342 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007f505110132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:53:51 np0005532762 systemd[1]: systemd-coredump@2-148943-0.service: Deactivated successfully.
Nov 23 15:53:51 np0005532762 systemd[1]: systemd-coredump@2-148943-0.service: Consumed 1.198s CPU time.
Nov 23 15:53:51 np0005532762 podman[148950]: 2025-11-23 20:53:51.331988459 +0000 UTC m=+0.028473826 container died 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:53:51 np0005532762 systemd[1]: var-lib-containers-storage-overlay-cb5a284bd58b76c132ec5c1fe7fa8b99f88ae28b2871ecbfc5ae4312bab65a48-merged.mount: Deactivated successfully.
Nov 23 15:53:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:51.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:51 np0005532762 podman[148950]: 2025-11-23 20:53:51.490224036 +0000 UTC m=+0.186709393 container remove 1ff959a10d68e7580d7be117c171df90d016cecac18e249b4316df720bf9ce01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 23 15:53:51 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:53:51 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:53:51 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.456s CPU time.
Nov 23 15:53:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:55.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:53:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:53:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:53:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:53:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:53:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:57.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:53:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:53:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:53:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:53:59.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:53:59 np0005532762 podman[149220]: 2025-11-23 20:53:59.565579614 +0000 UTC m=+0.079868824 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 15:54:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:00 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:01 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 3.
Nov 23 15:54:01 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:54:01 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.456s CPU time.
Nov 23 15:54:01 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:54:02 np0005532762 podman[149299]: 2025-11-23 20:54:02.071794043 +0000 UTC m=+0.079227608 container create 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 15:54:02 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:02 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:02 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:02 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:02 np0005532762 podman[149299]: 2025-11-23 20:54:02.015763576 +0000 UTC m=+0.023197171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:54:02 np0005532762 podman[149299]: 2025-11-23 20:54:02.125807798 +0000 UTC m=+0.133241383 container init 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:54:02 np0005532762 podman[149299]: 2025-11-23 20:54:02.130493919 +0000 UTC m=+0.137927484 container start 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:54:02 np0005532762 bash[149299]: 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:54:02 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:54:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:02 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:05.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:07.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:08 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:08 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:08.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:09.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205411 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:54:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:13.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:14 np0005532762 kernel: SELinux:  Converting 2772 SID table entries...
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:54:14 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:54:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:14 : epoch 6923746a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:54:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:15.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e80016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:15 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:16.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:17.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7fc001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205417 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:17 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:19.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:19 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 23 15:54:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:19 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:20 np0005532762 podman[149423]: 2025-11-23 20:54:20.638711593 +0000 UTC m=+0.051091475 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 15:54:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:20.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:21 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:22 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:23 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:23 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:24 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:24.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:24 np0005532762 kernel: SELinux:  Converting 2772 SID table entries...
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:54:24 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:54:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:25 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8000091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:25 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:26 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:26.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:27 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8000091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:28 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:29 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7e8002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:29 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb7dc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[149314]: 23/11/2025 20:54:30 : epoch 6923746a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb800009ec0 fd 38 proxy ignored for local
Nov 23 15:54:30 np0005532762 kernel: ganesha.nfsd[149379]: segfault at 50 ip 00007fb8af74032e sp 00007fb87effc210 error 4 in libntirpc.so.5.8[7fb8af725000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 23 15:54:30 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:54:30 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 23 15:54:30 np0005532762 systemd[1]: Started Process Core Dump (PID 149457/UID 0).
Nov 23 15:54:30 np0005532762 podman[149458]: 2025-11-23 20:54:30.175644031 +0000 UTC m=+0.107979231 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 15:54:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:31 np0005532762 systemd-coredump[149459]: Process 149318 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 41:#012#0  0x00007fb8af74032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:54:31 np0005532762 systemd[1]: systemd-coredump@3-149457-0.service: Deactivated successfully.
Nov 23 15:54:31 np0005532762 podman[149489]: 2025-11-23 20:54:31.303978189 +0000 UTC m=+0.028534907 container died 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:54:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:31.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:31 np0005532762 systemd[1]: var-lib-containers-storage-overlay-b744fd0527429d8e05e010b7cb984d8007475a9014869ff0fbfe766f4a3c302a-merged.mount: Deactivated successfully.
Nov 23 15:54:31 np0005532762 podman[149489]: 2025-11-23 20:54:31.714737156 +0000 UTC m=+0.439293844 container remove 976578fc2e77df5184e890816fcd8c1a37386781e43ef95d07c87ac1de6451e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:54:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:54:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:54:31 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.252s CPU time.
Nov 23 15:54:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205433 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205436 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:54:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:36.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:38.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:39.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:41.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:42 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 4.
Nov 23 15:54:42 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:54:42 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.252s CPU time.
Nov 23 15:54:42 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:54:42 np0005532762 podman[152551]: 2025-11-23 20:54:42.511988988 +0000 UTC m=+0.023003552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:54:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:43 np0005532762 podman[152551]: 2025-11-23 20:54:43.274855001 +0000 UTC m=+0.785869545 container create f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 15:54:43 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:43 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:43 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:43 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:54:43 np0005532762 podman[152551]: 2025-11-23 20:54:43.457650975 +0000 UTC m=+0.968665599 container init f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 23 15:54:43 np0005532762 podman[152551]: 2025-11-23 20:54:43.464294949 +0000 UTC m=+0.975309523 container start f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 15:54:43 np0005532762 bash[152551]: f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f
Nov 23 15:54:43 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:54:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:43.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:54:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:43 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:54:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:45.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:47.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:54:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:54:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:49.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:49 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:54:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:49 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:54:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:50.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.048 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:54:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.049 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:54:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:54:51.049 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:54:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:51.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:51 np0005532762 podman[159000]: 2025-11-23 20:54:51.628712862 +0000 UTC m=+0.043882007 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 15:54:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:53.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:55.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:55 np0005532762 podman[161850]: 2025-11-23 20:54:55.712917656 +0000 UTC m=+0.084034186 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:54:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:55 np0005532762 podman[161850]: 2025-11-23 20:54:55.807348702 +0000 UTC m=+0.178465222 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:54:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:55 : epoch 69237493 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:54:56 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:56 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae18000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:56 np0005532762 podman[162347]: 2025-11-23 20:54:56.248076442 +0000 UTC m=+0.059503594 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:54:56 np0005532762 podman[162347]: 2025-11-23 20:54:56.25948602 +0000 UTC m=+0.070913192 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 15:54:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:56 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:56 np0005532762 podman[162643]: 2025-11-23 20:54:56.563198532 +0000 UTC m=+0.048198830 container exec f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 23 15:54:56 np0005532762 podman[162643]: 2025-11-23 20:54:56.5761716 +0000 UTC m=+0.061171898 container exec_died f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Nov 23 15:54:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:54:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:54:56 np0005532762 podman[162854]: 2025-11-23 20:54:56.769095469 +0000 UTC m=+0.044833172 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:54:56 np0005532762 podman[162854]: 2025-11-23 20:54:56.779135411 +0000 UTC m=+0.054873094 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 15:54:56 np0005532762 podman[163068]: 2025-11-23 20:54:56.965099648 +0000 UTC m=+0.049947316 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Nov 23 15:54:56 np0005532762 podman[163068]: 2025-11-23 20:54:56.976191548 +0000 UTC m=+0.061039206 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Nov 23 15:54:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:57.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:57 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:58 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205458 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:54:58 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:58 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:58 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:54:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:54:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:54:59 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:54:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:54:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:54:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:54:59.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:54:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:54:59 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:00 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:00 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:00 np0005532762 podman[165851]: 2025-11-23 20:55:00.677316236 +0000 UTC m=+0.090665638 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 15:55:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:00.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:00 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:01 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:02 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:02 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:03 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:04 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae00001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:04 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:04.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:55:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:05 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fae08001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:06 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:06 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:07.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[153090]: 23/11/2025 20:55:07 : epoch 69237493 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fadf4002470 fd 39 proxy ignored for local
Nov 23 15:55:07 np0005532762 kernel: ganesha.nfsd[162024]: segfault at 50 ip 00007faec47be32e sp 00007fae88ff8210 error 4 in libntirpc.so.5.8[7faec47a3000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 23 15:55:07 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:55:07 np0005532762 systemd[1]: Started Process Core Dump (PID 167160/UID 0).
Nov 23 15:55:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:12 np0005532762 systemd-coredump[167161]: Process 153135 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 47:#012#0  0x00007faec47be32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:55:12 np0005532762 systemd[1]: systemd-coredump@4-167160-0.service: Deactivated successfully.
Nov 23 15:55:12 np0005532762 systemd[1]: systemd-coredump@4-167160-0.service: Consumed 1.082s CPU time.
Nov 23 15:55:12 np0005532762 podman[167170]: 2025-11-23 20:55:12.119085321 +0000 UTC m=+0.023051982 container died f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 15:55:12 np0005532762 systemd[1]: var-lib-containers-storage-overlay-8ffb40541bf744548711f283b1247279c9fdfd60049305220ba9ccbfd0bc0820-merged.mount: Deactivated successfully.
Nov 23 15:55:12 np0005532762 podman[167170]: 2025-11-23 20:55:12.18180372 +0000 UTC m=+0.085770371 container remove f2f2a3dd8fa50cb909b3976216c43e6961fdaa3c816f16d084b16b89ca08fb7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Nov 23 15:55:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:55:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:55:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.476s CPU time.
Nov 23 15:55:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:15 np0005532762 kernel: SELinux:  Converting 2773 SID table entries...
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability open_perms=1
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability always_check_network=0
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 15:55:15 np0005532762 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 15:55:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:16 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:55:16 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 23 15:55:16 np0005532762 dbus-broker-launch[755]: Noticed file-system modification, trigger reload.
Nov 23 15:55:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:55:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:17.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:55:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205518 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:55:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:22 np0005532762 podman[167497]: 2025-11-23 20:55:22.408765888 +0000 UTC m=+0.060749236 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:55:22 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 5.
Nov 23 15:55:22 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:55:22 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.476s CPU time.
Nov 23 15:55:22 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:55:22 np0005532762 podman[167687]: 2025-11-23 20:55:22.6141167 +0000 UTC m=+0.021265610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:55:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:22.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:22 np0005532762 podman[167687]: 2025-11-23 20:55:22.79324864 +0000 UTC m=+0.200397530 container create d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 23 15:55:22 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:22 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:22 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:22 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:55:22 np0005532762 podman[167687]: 2025-11-23 20:55:22.850373908 +0000 UTC m=+0.257522818 container init d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:55:22 np0005532762 podman[167687]: 2025-11-23 20:55:22.857189211 +0000 UTC m=+0.264338101 container start d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:55:22 np0005532762 bash[167687]: d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:55:22 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:55:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:55:23 np0005532762 systemd[1]: Stopping OpenSSH server daemon...
Nov 23 15:55:23 np0005532762 systemd[1]: sshd.service: Deactivated successfully.
Nov 23 15:55:23 np0005532762 systemd[1]: Stopped OpenSSH server daemon.
Nov 23 15:55:23 np0005532762 systemd[1]: sshd.service: Consumed 7.420s CPU time, read 564.0K from disk, written 316.0K to disk.
Nov 23 15:55:23 np0005532762 systemd[1]: Stopped target sshd-keygen.target.
Nov 23 15:55:23 np0005532762 systemd[1]: Stopping sshd-keygen.target...
Nov 23 15:55:23 np0005532762 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:23 np0005532762 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:23 np0005532762 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 15:55:23 np0005532762 systemd[1]: Reached target sshd-keygen.target.
Nov 23 15:55:23 np0005532762 systemd[1]: Starting OpenSSH server daemon...
Nov 23 15:55:23 np0005532762 systemd[1]: Started OpenSSH server daemon.
Nov 23 15:55:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:25 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:55:25 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:55:25 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:25 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:25 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:25.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:25 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:55:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:55:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:55:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:31 np0005532762 podman[175911]: 2025-11-23 20:55:31.700211881 +0000 UTC m=+0.100402645 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 15:55:32 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:55:32 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:55:32 np0005532762 systemd[1]: man-db-cache-update.service: Consumed 9.522s CPU time.
Nov 23 15:55:32 np0005532762 systemd[1]: run-r55f65184e2804e1fbdfbc34c4cf18147.service: Deactivated successfully.
Nov 23 15:55:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:32.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:34.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:55:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc0016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:36.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:37.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205538 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:55:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:38.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:39 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:40.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:41.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:41 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:42.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:43 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:44 np0005532762 python3.9[177114]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:44 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:44 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:44 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:45 np0005532762 python3.9[177304]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:45 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:45 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:45 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:45 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:46 np0005532762 python3.9[177495]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:46 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:46 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:46 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:47 np0005532762 python3.9[177685]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:47 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:47 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:47 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:47 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:49 np0005532762 python3.9[177876]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:49 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:49 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:49 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:49 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:50 np0005532762 python3.9[178067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:50 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:50 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:50 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:55:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:55:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:55:51.050 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:55:51 np0005532762 python3.9[178257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:51 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:51 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:51 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:51 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:52 np0005532762 python3.9[178448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:52 np0005532762 podman[178450]: 2025-11-23 20:55:52.616777508 +0000 UTC m=+0.049582396 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 15:55:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:53 np0005532762 python3.9[178622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:53 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:53 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:53 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:53 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:55.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:55 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:56.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:56 np0005532762 python3.9[178813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 15:55:56 np0005532762 systemd[1]: Reloading.
Nov 23 15:55:57 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:55:57 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:55:57 np0005532762 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 23 15:55:57 np0005532762 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 23 15:55:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:55:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:57.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:55:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:57 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:58 np0005532762 python3.9[179006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:55:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:55:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:59 np0005532762 python3.9[179161]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:55:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:55:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:55:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:55:59.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:55:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:55:59 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:55:59 np0005532762 python3.9[179317]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:00 np0005532762 python3.9[179497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:01 np0005532762 python3.9[179652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:01.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:01 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:01 np0005532762 podman[179780]: 2025-11-23 20:56:01.969235294 +0000 UTC m=+0.076518638 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 15:56:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:02 np0005532762 python3.9[179824]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:02 np0005532762 python3.9[179988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:03.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:03 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:03 np0005532762 python3.9[180143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:04 np0005532762 python3.9[180349]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:05 np0005532762 python3.9[180535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:05 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:06 np0005532762 python3.9[180691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:06 np0005532762 python3.9[180846]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:07 np0005532762 python3.9[181001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:08 np0005532762 python3.9[181159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:56:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:09 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:11 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:12.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:56:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:13.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:13 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:14 np0005532762 python3.9[181342]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:14 np0005532762 python3.9[181494]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:15 np0005532762 python3.9[181646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:15 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:16 np0005532762 python3.9[181799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:17 np0005532762 python3.9[181951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:17 np0005532762 python3.9[182103]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:56:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:17.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:17 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:19 np0005532762 python3.9[182256]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:19 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:20 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:20 np0005532762 python3.9[182382]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931378.7874835-1623-122150935213093/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:20 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:20.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:20 np0005532762 python3.9[182559]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:21 np0005532762 python3.9[182684]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931380.3727849-1623-162860335058016/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:21.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:21 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:22 np0005532762 python3.9[182837]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:22 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:22 np0005532762 python3.9[182962]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931381.5461602-1623-225395930069391/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:23 np0005532762 podman[183086]: 2025-11-23 20:56:23.047618578 +0000 UTC m=+0.051255272 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 15:56:23 np0005532762 python3.9[183134]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:23.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:23 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:23 np0005532762 python3.9[183260]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931382.7664363-1623-120764517474096/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:24 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:24 np0005532762 python3.9[183412]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:24 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:24 np0005532762 python3.9[183537]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931383.9456756-1623-73524656600136/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:25 np0005532762 python3.9[183689]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:25.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:25 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205625 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:56:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:26 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:26 np0005532762 python3.9[183815]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931385.0416245-1623-221571323816832/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:26 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:26 np0005532762 python3.9[183967]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:27 np0005532762 python3.9[184090]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931386.2785227-1623-19103682277062/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:27 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:27 np0005532762 python3.9[184243]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:28 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:28 np0005532762 python3.9[184368]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763931387.408747-1623-211513633385040/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:28 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 15:56:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:29.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 15:56:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:29 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:30 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:30 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:31 np0005532762 python3.9[184521]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 23 15:56:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:31 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb8003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:32 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:32 np0005532762 podman[184647]: 2025-11-23 20:56:32.27170644 +0000 UTC m=+0.096965894 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 23 15:56:32 np0005532762 python3.9[184695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:32 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:33 np0005532762 python3.9[184853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:33 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:33 np0005532762 python3.9[185006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:34 np0005532762 python3.9[185158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:34 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:56:34 np0005532762 python3.9[185310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:35 np0005532762 python3.9[185462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:35.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:35 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:36 np0005532762 python3.9[185615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:36 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:36 np0005532762 python3.9[185767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:37 np0005532762 python3.9[185919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.625706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397625765, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4699, "num_deletes": 502, "total_data_size": 12906832, "memory_usage": 13076424, "flush_reason": "Manual Compaction"}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 23 15:56:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397717581, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8359352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13302, "largest_seqno": 17996, "table_properties": {"data_size": 8341630, "index_size": 11976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8305186, "raw_average_value_size": 4482, "num_data_blocks": 524, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930950, "oldest_key_time": 1763930950, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 91911 microseconds, and 13768 cpu microseconds.
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.717626) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8359352 bytes OK
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.717642) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720272) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720287) EVENT_LOG_v1 {"time_micros": 1763931397720283, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.720302) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12886541, prev total WAL file size 12886541, number of live WAL files 2.
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.722753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8163KB)], [27(12MB)]
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397722794, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21794607, "oldest_snapshot_seqno": -1}
Nov 23 15:56:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:56:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:37 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5079 keys, 15937737 bytes, temperature: kUnknown
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397862123, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15937737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15898966, "index_size": 24965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127060, "raw_average_key_size": 25, "raw_value_size": 15802094, "raw_average_value_size": 3111, "num_data_blocks": 1050, "num_entries": 5079, "num_filter_entries": 5079, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.862512) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15937737 bytes
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.892250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.8 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(4.5) write-amplify(1.9) OK, records in: 6101, records dropped: 1022 output_compression: NoCompression
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.892325) EVENT_LOG_v1 {"time_micros": 1763931397892294, "job": 14, "event": "compaction_finished", "compaction_time_micros": 139434, "compaction_time_cpu_micros": 32364, "output_level": 6, "num_output_files": 1, "total_output_size": 15937737, "num_input_records": 6101, "num_output_records": 5079, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397894975, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931397899026, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.722665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:37 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:56:37.899116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:56:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:38 np0005532762 python3.9[186073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:38 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:38 np0005532762 python3.9[186227]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:39 np0005532762 python3.9[186379]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:39 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:39.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:39 np0005532762 python3.9[186532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:40 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:56:41 np0005532762 python3.9[186709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:41 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:41.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:42 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:43 np0005532762 python3.9[186864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:43 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:43.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:44 np0005532762 python3.9[186987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931402.9988172-2286-243379155914923/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:44 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:44 np0005532762 python3.9[187139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:45 np0005532762 python3.9[187262]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931404.4622972-2286-126981386187817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:45 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc001d70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:45.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:46 np0005532762 python3.9[187417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:46 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:46 np0005532762 python3.9[187540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931405.6262276-2286-226957958230948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:47 np0005532762 python3.9[187692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:47 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:47.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205647 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:56:47 np0005532762 python3.9[187816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931406.8141205-2286-203052722212412/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:48 np0005532762 python3.9[187968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:48 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:48.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:48 np0005532762 python3.9[188093]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931407.9962013-2286-257078694264926/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:49 np0005532762 python3.9[188245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:49 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:49.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:50 np0005532762 python3.9[188369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931409.110169-2286-136955137666298/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:50 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:50 np0005532762 python3.9[188521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.052 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:56:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:56:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:56:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:56:51 np0005532762 auditd[701]: Audit daemon rotating log files
Nov 23 15:56:51 np0005532762 python3.9[188644]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931410.2447076-2286-171263410337530/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:51 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4001ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:51 np0005532762 python3.9[188797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:52 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:52 np0005532762 python3.9[188920]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931411.5123878-2286-46819556889691/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:53 np0005532762 podman[189072]: 2025-11-23 20:56:53.148814903 +0000 UTC m=+0.053708777 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 15:56:53 np0005532762 python3.9[189073]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:53 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:53 np0005532762 python3.9[189216]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931412.7903717-2286-109300093662409/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb40030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:54 np0005532762 python3.9[189368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:54 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:54.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:54 np0005532762 python3.9[189491]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931413.9631433-2286-72832770629622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:55 np0005532762 python3.9[189643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:55 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205655 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:56:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:56 np0005532762 python3.9[189767]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931415.1074858-2286-151123828464600/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:56 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb40030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:56 np0005532762 python3.9[189919]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:56.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:57 np0005532762 python3.9[190042]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931416.2932029-2286-55001929404330/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:57 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:56:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:57.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:56:57 np0005532762 python3.9[190195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:56:58 np0005532762 python3.9[190318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931417.500531-2286-270607796280990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:56:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:58 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:56:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:59 np0005532762 python3.9[190470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:56:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:56:59 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:56:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:56:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:56:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:56:59.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:56:59 np0005532762 python3.9[190594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931418.7044406-2286-136484090450146/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:00 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:00.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:01 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:01.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:02 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:02 np0005532762 podman[190645]: 2025-11-23 20:57:02.663059474 +0000 UTC m=+0.075007816 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 15:57:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:02.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:03 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:03.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:04 np0005532762 python3.9[190797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:04 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:57:05 np0005532762 python3.9[190952]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 23 15:57:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:05 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:05.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:06 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:07 np0005532762 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 23 15:57:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:07.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:57:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:07 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:57:07 np0005532762 python3.9[191110]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ccc003ef0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:08 np0005532762 python3.9[191262]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:08 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:09 np0005532762 python3.9[191414]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:09 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cc0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:09.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:09 np0005532762 python3.9[191567]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:10 np0005532762 python3.9[191721]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:10 : epoch 692374ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:57:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:11 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cdc0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:11.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.511513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432511557, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 546, "num_deletes": 252, "total_data_size": 894695, "memory_usage": 905776, "flush_reason": "Manual Compaction"}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432519750, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 411643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18001, "largest_seqno": 18542, "table_properties": {"data_size": 409057, "index_size": 622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6605, "raw_average_key_size": 19, "raw_value_size": 403893, "raw_average_value_size": 1191, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931398, "oldest_key_time": 1763931398, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 8299 microseconds, and 3610 cpu microseconds.
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.519811) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 411643 bytes OK
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.519838) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521345) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521371) EVENT_LOG_v1 {"time_micros": 1763931432521364, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 891569, prev total WAL file size 891569, number of live WAL files 2.
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.522117) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(401KB)], [30(15MB)]
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432522165, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16349380, "oldest_snapshot_seqno": -1}
Nov 23 15:57:12 np0005532762 python3.9[191876]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:12 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4917 keys, 12435371 bytes, temperature: kUnknown
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432679905, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12435371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12401888, "index_size": 20061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124078, "raw_average_key_size": 25, "raw_value_size": 12312063, "raw_average_value_size": 2503, "num_data_blocks": 836, "num_entries": 4917, "num_filter_entries": 4917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.680109) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12435371 bytes
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.687263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.6 rd, 78.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 15.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(69.9) write-amplify(30.2) OK, records in: 5418, records dropped: 501 output_compression: NoCompression
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.687283) EVENT_LOG_v1 {"time_micros": 1763931432687274, "job": 16, "event": "compaction_finished", "compaction_time_micros": 157794, "compaction_time_cpu_micros": 52624, "output_level": 6, "num_output_files": 1, "total_output_size": 12435371, "num_input_records": 5418, "num_output_records": 4917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432687441, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931432689857, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.521995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:57:12.689942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:57:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:57:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:57:13 np0005532762 python3.9[192078]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:13 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:13 np0005532762 python3.9[192263]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:14 np0005532762 python3.9[192415]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:57:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:57:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:14 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0026d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:14 np0005532762 python3.9[192567]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:15 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:15.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:16 np0005532762 python3.9[192720]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:16 np0005532762 systemd[1]: Reloading.
Nov 23 15:57:16 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:16 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:16 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4ca80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:16 np0005532762 systemd[1]: Starting libvirt logging daemon socket...
Nov 23 15:57:16 np0005532762 systemd[1]: Listening on libvirt logging daemon socket.
Nov 23 15:57:16 np0005532762 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 23 15:57:16 np0005532762 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 23 15:57:16 np0005532762 systemd[1]: Starting libvirt logging daemon...
Nov 23 15:57:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:16 np0005532762 systemd[1]: Started libvirt logging daemon.
Nov 23 15:57:17 np0005532762 python3.9[192913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:17 np0005532762 systemd[1]: Reloading.
Nov 23 15:57:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:17 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cac0020f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:17.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:17 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:17 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:18 np0005532762 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 23 15:57:18 np0005532762 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 23 15:57:18 np0005532762 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 23 15:57:18 np0005532762 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 23 15:57:18 np0005532762 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 23 15:57:18 np0005532762 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 23 15:57:18 np0005532762 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 15:57:18 np0005532762 systemd[1]: Started libvirt nodedev daemon.
Nov 23 15:57:18 np0005532762 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 23 15:57:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[167891]: 23/11/2025 20:57:18 : epoch 692374ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4cb4003db0 fd 38 proxy ignored for local
Nov 23 15:57:18 np0005532762 kernel: ganesha.nfsd[191569]: segfault at 50 ip 00007f4d85dad32e sp 00007f4d3e7fb210 error 4 in libntirpc.so.5.8[7f4d85d92000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:57:18 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:57:18 np0005532762 systemd[1]: Started Process Core Dump (PID 192979/UID 0).
Nov 23 15:57:18 np0005532762 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 23 15:57:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:18 np0005532762 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Nov 23 15:57:18 np0005532762 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 23 15:57:18 np0005532762 python3.9[193159]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:18 np0005532762 systemd[1]: Reloading.
Nov 23 15:57:18 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:18 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:18.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:18 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:57:19 np0005532762 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 23 15:57:19 np0005532762 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 23 15:57:19 np0005532762 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 23 15:57:19 np0005532762 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 23 15:57:19 np0005532762 systemd[1]: Starting libvirt proxy daemon...
Nov 23 15:57:19 np0005532762 systemd[1]: Started libvirt proxy daemon.
Nov 23 15:57:19 np0005532762 systemd-coredump[192980]: Process 167914 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 60:#012#0  0x00007f4d85dad32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:57:19 np0005532762 systemd[1]: systemd-coredump@5-192979-0.service: Deactivated successfully.
Nov 23 15:57:19 np0005532762 systemd[1]: systemd-coredump@5-192979-0.service: Consumed 1.128s CPU time.
Nov 23 15:57:19 np0005532762 podman[193297]: 2025-11-23 20:57:19.409774106 +0000 UTC m=+0.029928721 container died d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 15:57:19 np0005532762 systemd[1]: var-lib-containers-storage-overlay-dcfbd664ea36e63f9ed359c0f79e6328cde03d445ebf8fde9d5034673736dd60-merged.mount: Deactivated successfully.
Nov 23 15:57:19 np0005532762 podman[193297]: 2025-11-23 20:57:19.476758438 +0000 UTC m=+0.096913033 container remove d38ed78145ce27a698715b902dd179194e031801435ca90af85af498b8f8280c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 23 15:57:19 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:57:19 np0005532762 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 13725f17-2087-483b-8286-d61f28d1887a
Nov 23 15:57:19 np0005532762 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 23 15:57:19 np0005532762 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 13725f17-2087-483b-8286-d61f28d1887a
Nov 23 15:57:19 np0005532762 setroubleshoot[192978]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 23 15:57:19 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:57:19 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.522s CPU time.
Nov 23 15:57:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:19.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:19 np0005532762 python3.9[193428]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:19 np0005532762 systemd[1]: Reloading.
Nov 23 15:57:20 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:20 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:20 np0005532762 systemd[1]: Listening on libvirt locking daemon socket.
Nov 23 15:57:20 np0005532762 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 23 15:57:20 np0005532762 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 15:57:20 np0005532762 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 23 15:57:20 np0005532762 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 23 15:57:20 np0005532762 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 23 15:57:20 np0005532762 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 23 15:57:20 np0005532762 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 23 15:57:20 np0005532762 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 23 15:57:20 np0005532762 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 23 15:57:20 np0005532762 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 15:57:20 np0005532762 systemd[1]: Started libvirt QEMU daemon.
Nov 23 15:57:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:21 np0005532762 python3.9[193669]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:57:21 np0005532762 systemd[1]: Reloading.
Nov 23 15:57:21 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:57:21 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:57:21 np0005532762 systemd[1]: Starting libvirt secret daemon socket...
Nov 23 15:57:21 np0005532762 systemd[1]: Listening on libvirt secret daemon socket.
Nov 23 15:57:21 np0005532762 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 23 15:57:21 np0005532762 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 23 15:57:21 np0005532762 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 23 15:57:21 np0005532762 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 23 15:57:21 np0005532762 systemd[1]: Starting libvirt secret daemon...
Nov 23 15:57:21 np0005532762 systemd[1]: Started libvirt secret daemon.
Nov 23 15:57:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:21.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:22.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:23 np0005532762 podman[193854]: 2025-11-23 20:57:23.517020706 +0000 UTC m=+0.072502639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 15:57:23 np0005532762 python3.9[193901]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:23.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205724 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:57:24 np0005532762 python3.9[194055]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:57:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:24.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:25 np0005532762 python3.9[194207]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:25.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:26 np0005532762 python3.9[194362]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 15:57:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:27 np0005532762 python3.9[194512]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:27.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:27 np0005532762 python3.9[194634]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931446.9392507-3360-237808396368484/.source.xml follow=False _original_basename=secret.xml.j2 checksum=2095b2efdb764c083af64051baa9ed5d4618fea0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:28 np0005532762 python3.9[194786]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 03808be8-ae4a-5548-82e6-4a294f1bc627#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:29 np0005532762 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 23 15:57:29 np0005532762 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.003s CPU time.
Nov 23 15:57:29 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 6.
Nov 23 15:57:29 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:57:29 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.522s CPU time.
Nov 23 15:57:29 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:57:29 np0005532762 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 23 15:57:29 np0005532762 python3.9[194949]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:29.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:29 np0005532762 podman[195014]: 2025-11-23 20:57:29.878510137 +0000 UTC m=+0.043872438 container create 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 23 15:57:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:29 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:57:29 np0005532762 podman[195014]: 2025-11-23 20:57:29.931612815 +0000 UTC m=+0.096975136 container init 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 23 15:57:29 np0005532762 podman[195014]: 2025-11-23 20:57:29.938817991 +0000 UTC m=+0.104180292 container start 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 15:57:29 np0005532762 bash[195014]: 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8
Nov 23 15:57:29 np0005532762 podman[195014]: 2025-11-23 20:57:29.857443892 +0000 UTC m=+0.022806213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:57:29 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:57:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:57:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:57:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:32 np0005532762 python3.9[195517]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:33 np0005532762 podman[195641]: 2025-11-23 20:57:33.204666415 +0000 UTC m=+0.079189339 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:57:33 np0005532762 python3.9[195687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:33.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:33 np0005532762 python3.9[195819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931452.8606696-3525-209651606109064/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:35 np0005532762 python3.9[195971]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:35.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:35 np0005532762 python3.9[196124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:57:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:57:36 np0005532762 python3.9[196202]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:36.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:37 np0005532762 python3.9[196354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:37 np0005532762 python3.9[196433]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.13jp45ov recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:38 np0005532762 python3.9[196585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:57:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:57:39 np0005532762 python3.9[196663]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:57:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:57:40 np0005532762 python3.9[196816]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:57:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:57:41 np0005532762 python3[196994]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 15:57:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:41.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:41 np0005532762 python3.9[197147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:57:42 np0005532762 python3.9[197236]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b0c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:43 np0005532762 python3.9[197391]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:43 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205743 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:57:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:43.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:43 np0005532762 python3.9[197470]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:45 np0005532762 python3.9[197622]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:45 np0005532762 python3.9[197701]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:45 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:57:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:57:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205746 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:57:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:46 np0005532762 python3.9[197853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:57:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:46.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:57:47 np0005532762 python3.9[197933]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:47 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:47.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:48 np0005532762 python3.9[198086]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:49 np0005532762 python3.9[198211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763931467.7226827-3900-125356492517370/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:49 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:50 np0005532762 python3.9[198364]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:50 np0005532762 python3.9[198516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.053 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:57:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:57:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:57:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:57:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:51 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:51.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:51 np0005532762 python3.9[198672]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:52 np0005532762 python3.9[198824]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:53 np0005532762 podman[198950]: 2025-11-23 20:57:53.658669956 +0000 UTC m=+0.074399819 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 15:57:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:53 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:53 np0005532762 python3.9[198997]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:57:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:54 np0005532762 python3.9[199151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:57:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:54.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:55 np0005532762 python3.9[199306]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:55 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:56 np0005532762 python3.9[199459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:56.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:57 np0005532762 python3.9[199582]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931475.9867065-4116-33671384725885/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:57 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:58 np0005532762 python3.9[199735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:57:58 np0005532762 python3.9[199858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931477.5665505-4161-236484583150557/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:57:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:57:58.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:57:59 np0005532762 python3.9[200010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:57:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:57:59 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:57:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:57:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:57:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:57:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:00 np0005532762 python3.9[200134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931479.0427454-4207-59424836291676/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:01 np0005532762 python3.9[200311]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:01 np0005532762 systemd[1]: Reloading.
Nov 23 15:58:01 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:01 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:01 np0005532762 systemd[1]: Reached target edpm_libvirt.target.
Nov 23 15:58:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:01 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:01.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:02 np0005532762 python3.9[200503]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 15:58:02 np0005532762 systemd[1]: Reloading.
Nov 23 15:58:02 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:02 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:02 np0005532762 systemd[1]: Reloading.
Nov 23 15:58:02 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:02 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:03 np0005532762 podman[200600]: 2025-11-23 20:58:03.680081453 +0000 UTC m=+0.092956455 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 15:58:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:03 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:03 np0005532762 systemd[1]: session-52.scope: Deactivated successfully.
Nov 23 15:58:03 np0005532762 systemd[1]: session-52.scope: Consumed 3min 14.618s CPU time.
Nov 23 15:58:03 np0005532762 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Nov 23 15:58:03 np0005532762 systemd-logind[793]: Removed session 52.
Nov 23 15:58:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:05 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:07 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:07.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:09 np0005532762 systemd-logind[793]: New session 53 of user zuul.
Nov 23 15:58:09 np0005532762 systemd[1]: Started Session 53 of User zuul.
Nov 23 15:58:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:10 np0005532762 python3.9[200782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:58:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:11 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:12 np0005532762 python3.9[200939]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:58:12 np0005532762 network[200956]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:58:12 np0005532762 network[200957]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:58:12 np0005532762 network[200958]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:58:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b040034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:12.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205813 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:58:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:15 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:17 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:17.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:18 np0005532762 python3.9[201235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 15:58:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:18.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:19 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:19 np0005532762 python3.9[201399]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:58:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:21 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:22.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:23 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:58:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:23 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:24 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:24 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:24 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:58:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:24 np0005532762 podman[201429]: 2025-11-23 20:58:24.634599341 +0000 UTC m=+0.051546557 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:58:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:24.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:25 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:58:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:27 np0005532762 python3.9[201602]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:27 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:28 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:58:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:28 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:28 np0005532762 python3.9[201779]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:58:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:58:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:29 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:29 np0005532762 python3.9[201933]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:30 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:30 np0005532762 python3.9[202085]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:30.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:31 np0005532762 python3.9[202238]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:31 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80037e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:31.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:32 np0005532762 python3.9[202362]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931511.0419207-246-40530868588055/.source.iscsi _original_basename=.yo4xdt8g follow=False checksum=42fe1ad2782de6c869e598a65c6917a7cbe14437 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:58:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:32 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:33 np0005532762 python3.9[202514]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:33 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:33 np0005532762 podman[202639]: 2025-11-23 20:58:33.881852133 +0000 UTC m=+0.122448030 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 15:58:34 np0005532762 python3.9[202684]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:34 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003800 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:34 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:35 np0005532762 python3.9[202845]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:35 np0005532762 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 23 15:58:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:35 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:36 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:36 np0005532762 python3.9[203002]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:58:36 np0005532762 systemd[1]: Reloading.
Nov 23 15:58:36 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:58:36 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:58:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:36.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:37 np0005532762 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 15:58:37 np0005532762 systemd[1]: Starting Open-iSCSI...
Nov 23 15:58:37 np0005532762 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 15:58:37 np0005532762 systemd[1]: Started Open-iSCSI.
Nov 23 15:58:37 np0005532762 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 23 15:58:37 np0005532762 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 23 15:58:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205837 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:58:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:37 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:38 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:38 np0005532762 python3.9[203203]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:58:38 np0005532762 network[203220]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:58:38 np0005532762 network[203221]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:58:38 np0005532762 network[203222]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:58:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:38 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:39 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:40 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:40 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:40.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:41 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af8003860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:42 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:42.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:43 np0005532762 python3.9[203521]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:58:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:43 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:44 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:44.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:45 np0005532762 python3.9[203675]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 23 15:58:45 np0005532762 python3.9[203832]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:45 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:46 np0005532762 python3.9[203956]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931525.345442-477-214486405983401/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:46 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:46.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:47 np0005532762 python3.9[204108]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:47 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:48 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:48 np0005532762 python3.9[204261]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:58:48 np0005532762 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 15:58:48 np0005532762 systemd[1]: Stopped Load Kernel Modules.
Nov 23 15:58:48 np0005532762 systemd[1]: Stopping Load Kernel Modules...
Nov 23 15:58:48 np0005532762 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:58:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:48 np0005532762 systemd[1]: Finished Load Kernel Modules.
Nov 23 15:58:49 np0005532762 python3.9[204418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:58:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:49 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:50 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:50 np0005532762 python3.9[204570]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:50.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.054 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:58:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.055 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:58:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:58:51.055 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:58:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:51 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:52 np0005532762 python3.9[204723]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:58:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:52 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:52 np0005532762 python3.9[204875]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:58:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:58:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:52.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:58:53 np0005532762 python3.9[204998]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931532.3490767-651-218517626441705/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:53 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:54 np0005532762 python3.9[205151]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:58:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:54 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:55 np0005532762 podman[205276]: 2025-11-23 20:58:55.003413125 +0000 UTC m=+0.053017887 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 15:58:55 np0005532762 python3.9[205323]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:55 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:58:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:55.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:58:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:56 np0005532762 python3.9[205477]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:56 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:57 np0005532762 python3.9[205629]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205857 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:58:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:57 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:58 np0005532762 python3.9[205782]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:58 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:58:58 np0005532762 python3.9[205934]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:58:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:59 np0005532762 python3.9[206086]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:58:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:58:59 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:58:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:58:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:58:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:58:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:58:59 np0005532762 python3.9[206239]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:00 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:01 np0005532762 python3.9[206392]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:01 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:02 np0005532762 python3.9[206571]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:02 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:02 np0005532762 python3.9[206723]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:02.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:03 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:03 np0005532762 python3.9[206876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:04 np0005532762 podman[206926]: 2025-11-23 20:59:04.197920779 +0000 UTC m=+0.111567774 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 15:59:04 np0005532762 python3.9[206974]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:04 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:04 np0005532762 python3.9[207132]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:05.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:05 np0005532762 python3.9[207210]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:05 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec0041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:06 np0005532762 python3.9[207363]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:06 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:59:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:07 np0005532762 python3.9[207515]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.746688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547746752, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1302, "num_deletes": 255, "total_data_size": 3217770, "memory_usage": 3273592, "flush_reason": "Manual Compaction"}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547760916, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2105783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18547, "largest_seqno": 19844, "table_properties": {"data_size": 2100198, "index_size": 2977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11231, "raw_average_key_size": 18, "raw_value_size": 2089134, "raw_average_value_size": 3470, "num_data_blocks": 133, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931433, "oldest_key_time": 1763931433, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14256 microseconds, and 6561 cpu microseconds.
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760953) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2105783 bytes OK
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.760971) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762585) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762602) EVENT_LOG_v1 {"time_micros": 1763931547762597, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.762618) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3211677, prev total WAL file size 3211677, number of live WAL files 2.
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.763444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2056KB)], [33(11MB)]
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547763506, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14541154, "oldest_snapshot_seqno": -1}
Nov 23 15:59:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:07 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4995 keys, 14064059 bytes, temperature: kUnknown
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547885107, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 14064059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14029019, "index_size": 21426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126852, "raw_average_key_size": 25, "raw_value_size": 13936817, "raw_average_value_size": 2790, "num_data_blocks": 881, "num_entries": 4995, "num_filter_entries": 4995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931547, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.885295) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14064059 bytes
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.886498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.5 rd, 115.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.9 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(13.6) write-amplify(6.7) OK, records in: 5519, records dropped: 524 output_compression: NoCompression
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.886513) EVENT_LOG_v1 {"time_micros": 1763931547886505, "job": 18, "event": "compaction_finished", "compaction_time_micros": 121652, "compaction_time_cpu_micros": 28566, "output_level": 6, "num_output_files": 1, "total_output_size": 14064059, "num_input_records": 5519, "num_output_records": 4995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547886890, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931547888547, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.763342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-20:59:07.888582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 15:59:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:08 np0005532762 python3.9[207594]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:08 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:08 np0005532762 python3.9[207746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:09.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:09 np0005532762 python3.9[207824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:59:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:59:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:09 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:59:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:10 np0005532762 python3.9[207977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:10 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:10 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:10 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:10 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:11.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:11 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:12 np0005532762 python3.9[208168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:12 np0005532762 python3.9[208246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:12 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:13.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:59:13 np0005532762 python3.9[208398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:13 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:13 np0005532762 python3.9[208477]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:13.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6aec004250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:14 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:14 np0005532762 python3.9[208629]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:15 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:15 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:15 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:15 np0005532762 systemd[1]: Starting Create netns directory...
Nov 23 15:59:15 np0005532762 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 15:59:15 np0005532762 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 15:59:15 np0005532762 systemd[1]: Finished Create netns directory.
Nov 23 15:59:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:15 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:15.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:16 np0005532762 python3.9[208823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:16 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6af80008d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:17.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:17 np0005532762 python3.9[208975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:17 np0005532762 python3.9[209099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931556.895982-1273-113789735648345/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:17 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:18 np0005532762 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 23 15:59:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:18 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:19.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:19 np0005532762 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 23 15:59:19 np0005532762 python3.9[209253]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 15:59:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:19 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205919 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:59:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:19.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:20 np0005532762 python3.9[209407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:20 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:20 np0005532762 python3.9[209530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931559.7934384-1347-276148035246262/.source.json _original_basename=._w_91cm2 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:21.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:21 np0005532762 python3.9[209708]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:21 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ad8001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:21.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:22 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:23 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:23.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:24 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:24 np0005532762 python3.9[210136]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 23 15:59:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:25 np0005532762 podman[210261]: 2025-11-23 20:59:25.601175048 +0000 UTC m=+0.060544291 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 15:59:25 np0005532762 python3.9[210303]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 15:59:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:25 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ae8004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:26 np0005532762 kernel: ganesha.nfsd[209100]: segfault at 50 ip 00007f6bb4e3432e sp 00007f6b76ffc210 error 4 in libntirpc.so.5.8[7f6bb4e19000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 23 15:59:26 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 15:59:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[195036]: 23/11/2025 20:59:26 : epoch 69237539 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b04004780 fd 38 proxy ignored for local
Nov 23 15:59:26 np0005532762 systemd[1]: Started Process Core Dump (PID 210333/UID 0).
Nov 23 15:59:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:27.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:27 np0005532762 python3.9[210462]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 15:59:27 np0005532762 systemd-coredump[210334]: Process 195040 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 59:#012#0  0x00007f6bb4e3432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 15:59:27 np0005532762 systemd[1]: systemd-coredump@6-210333-0.service: Deactivated successfully.
Nov 23 15:59:27 np0005532762 systemd[1]: systemd-coredump@6-210333-0.service: Consumed 1.035s CPU time.
Nov 23 15:59:27 np0005532762 podman[210515]: 2025-11-23 20:59:27.339621888 +0000 UTC m=+0.026237506 container died 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 15:59:27 np0005532762 systemd[1]: var-lib-containers-storage-overlay-4fff88ddf62e59bbaca93d42aba99bc0cdc0c8fa1af4ad77cb6d0566221c0570-merged.mount: Deactivated successfully.
Nov 23 15:59:27 np0005532762 podman[210515]: 2025-11-23 20:59:27.432467339 +0000 UTC m=+0.119082947 container remove 53986badd315b38d8b9fa281241deaae5f5b036f9383287bb4abe40b27adebd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:27 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 15:59:27 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 15:59:27 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.403s CPU time.
Nov 23 15:59:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:29 np0005532762 python3[210758]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 15:59:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:29.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 15:59:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 15:59:30 np0005532762 podman[210785]: 2025-11-23 20:59:30.170437431 +0000 UTC m=+1.040954953 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 15:59:30 np0005532762 podman[210844]: 2025-11-23 20:59:30.309710178 +0000 UTC m=+0.046798316 container create 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 15:59:30 np0005532762 podman[210844]: 2025-11-23 20:59:30.288264194 +0000 UTC m=+0.025352342 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 15:59:30 np0005532762 python3[210758]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 23 15:59:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:31 np0005532762 python3.9[211035]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:31 np0005532762 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 15:59:31 np0005532762 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 23 15:59:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 15:59:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:31.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 15:59:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205932 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:59:32 np0005532762 python3.9[211192]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:32 np0005532762 python3.9[211268]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:33 np0005532762 python3.9[211420]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931572.9666107-1611-145048122540003/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:34 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 15:59:34 np0005532762 python3.9[211521]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:59:34 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:34 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:34 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:34 np0005532762 podman[211523]: 2025-11-23 20:59:34.408324179 +0000 UTC m=+0.081800591 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 15:59:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:35 np0005532762 python3.9[211659]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 15:59:35 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:35 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:35 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:35 np0005532762 systemd[1]: Starting multipathd container...
Nov 23 15:59:35 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:59:35 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:35 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:35 np0005532762 systemd[1]: Started /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 15:59:35 np0005532762 podman[211699]: 2025-11-23 20:59:35.674807949 +0000 UTC m=+0.113999066 container init 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 15:59:35 np0005532762 multipathd[211714]: + sudo -E kolla_set_configs
Nov 23 15:59:35 np0005532762 podman[211699]: 2025-11-23 20:59:35.698172709 +0000 UTC m=+0.137363806 container start 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 23 15:59:35 np0005532762 podman[211699]: multipathd
Nov 23 15:59:35 np0005532762 systemd[1]: Started multipathd container.
Nov 23 15:59:35 np0005532762 podman[211721]: 2025-11-23 20:59:35.764630953 +0000 UTC m=+0.056694946 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 15:59:35 np0005532762 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 15:59:35 np0005532762 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.service: Failed with result 'exit-code'.
Nov 23 15:59:35 np0005532762 multipathd[211714]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:59:35 np0005532762 multipathd[211714]: INFO:__main__:Validating config file
Nov 23 15:59:35 np0005532762 multipathd[211714]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:59:35 np0005532762 multipathd[211714]: INFO:__main__:Writing out command to execute
Nov 23 15:59:35 np0005532762 multipathd[211714]: ++ cat /run_command
Nov 23 15:59:35 np0005532762 multipathd[211714]: + CMD='/usr/sbin/multipathd -d'
Nov 23 15:59:35 np0005532762 multipathd[211714]: + ARGS=
Nov 23 15:59:35 np0005532762 multipathd[211714]: + sudo kolla_copy_cacerts
Nov 23 15:59:35 np0005532762 multipathd[211714]: + [[ ! -n '' ]]
Nov 23 15:59:35 np0005532762 multipathd[211714]: + . kolla_extend_start
Nov 23 15:59:35 np0005532762 multipathd[211714]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 15:59:35 np0005532762 multipathd[211714]: Running command: '/usr/sbin/multipathd -d'
Nov 23 15:59:35 np0005532762 multipathd[211714]: + umask 0022
Nov 23 15:59:35 np0005532762 multipathd[211714]: + exec /usr/sbin/multipathd -d
Nov 23 15:59:35 np0005532762 multipathd[211714]: 3524.444443 | --------start up--------
Nov 23 15:59:35 np0005532762 multipathd[211714]: 3524.444461 | read /etc/multipath.conf
Nov 23 15:59:35 np0005532762 multipathd[211714]: 3524.449511 | path checkers start up
Nov 23 15:59:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:35.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:37 np0005532762 python3.9[211903]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 15:59:37 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 7.
Nov 23 15:59:37 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:59:37 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.403s CPU time.
Nov 23 15:59:37 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 15:59:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:37.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:38 np0005532762 podman[212105]: 2025-11-23 20:59:38.026079142 +0000 UTC m=+0.042613182 container create 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 15:59:38 np0005532762 python3.9[212077]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 15:59:38 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:38 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:38 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:38 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:38 np0005532762 podman[212105]: 2025-11-23 20:59:38.082911831 +0000 UTC m=+0.099445901 container init 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 23 15:59:38 np0005532762 podman[212105]: 2025-11-23 20:59:38.08964986 +0000 UTC m=+0.106183880 container start 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 23 15:59:38 np0005532762 bash[212105]: 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16
Nov 23 15:59:38 np0005532762 podman[212105]: 2025-11-23 20:59:38.004224592 +0000 UTC m=+0.020758652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 15:59:38 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 15:59:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 15:59:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:39.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:39 np0005532762 python3.9[212326]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:39 np0005532762 systemd[1]: Stopping multipathd container...
Nov 23 15:59:39 np0005532762 multipathd[211714]: 3527.829224 | exit (signal)
Nov 23 15:59:39 np0005532762 multipathd[211714]: 3527.829285 | --------shut down-------
Nov 23 15:59:39 np0005532762 systemd[1]: libpod-8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.scope: Deactivated successfully.
Nov 23 15:59:39 np0005532762 podman[212330]: 2025-11-23 20:59:39.232664515 +0000 UTC m=+0.064137604 container died 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 23 15:59:39 np0005532762 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2c67b0cf0757b69f.timer: Deactivated successfully.
Nov 23 15:59:39 np0005532762 systemd[1]: Stopped /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 15:59:39 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-userdata-shm.mount: Deactivated successfully.
Nov 23 15:59:39 np0005532762 systemd[1]: var-lib-containers-storage-overlay-7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b-merged.mount: Deactivated successfully.
Nov 23 15:59:39 np0005532762 podman[212330]: 2025-11-23 20:59:39.376585175 +0000 UTC m=+0.208058244 container cleanup 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 15:59:39 np0005532762 podman[212330]: multipathd
Nov 23 15:59:39 np0005532762 podman[212357]: multipathd
Nov 23 15:59:39 np0005532762 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 23 15:59:39 np0005532762 systemd[1]: Stopped multipathd container.
Nov 23 15:59:39 np0005532762 systemd[1]: Starting multipathd container...
Nov 23 15:59:39 np0005532762 systemd[1]: Started libcrun container.
Nov 23 15:59:39 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:39 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7726ae459c391d9257c0002f58c608b0e80bc067b5fffa8d7a4f9296ae99102b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 15:59:39 np0005532762 systemd[1]: Started /usr/bin/podman healthcheck run 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755.
Nov 23 15:59:39 np0005532762 podman[212370]: 2025-11-23 20:59:39.56594286 +0000 UTC m=+0.090874543 container init 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 15:59:39 np0005532762 multipathd[212383]: + sudo -E kolla_set_configs
Nov 23 15:59:39 np0005532762 podman[212370]: 2025-11-23 20:59:39.585953061 +0000 UTC m=+0.110884744 container start 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 15:59:39 np0005532762 podman[212370]: multipathd
Nov 23 15:59:39 np0005532762 systemd[1]: Started multipathd container.
Nov 23 15:59:39 np0005532762 multipathd[212383]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 15:59:39 np0005532762 multipathd[212383]: INFO:__main__:Validating config file
Nov 23 15:59:39 np0005532762 multipathd[212383]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 15:59:39 np0005532762 multipathd[212383]: INFO:__main__:Writing out command to execute
Nov 23 15:59:39 np0005532762 multipathd[212383]: ++ cat /run_command
Nov 23 15:59:39 np0005532762 multipathd[212383]: + CMD='/usr/sbin/multipathd -d'
Nov 23 15:59:39 np0005532762 multipathd[212383]: + ARGS=
Nov 23 15:59:39 np0005532762 multipathd[212383]: + sudo kolla_copy_cacerts
Nov 23 15:59:39 np0005532762 multipathd[212383]: Running command: '/usr/sbin/multipathd -d'
Nov 23 15:59:39 np0005532762 multipathd[212383]: + [[ ! -n '' ]]
Nov 23 15:59:39 np0005532762 multipathd[212383]: + . kolla_extend_start
Nov 23 15:59:39 np0005532762 multipathd[212383]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 15:59:39 np0005532762 multipathd[212383]: + umask 0022
Nov 23 15:59:39 np0005532762 multipathd[212383]: + exec /usr/sbin/multipathd -d
Nov 23 15:59:39 np0005532762 podman[212393]: 2025-11-23 20:59:39.657618714 +0000 UTC m=+0.062052769 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 15:59:39 np0005532762 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2fc5d13b8cdc79bd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 15:59:39 np0005532762 systemd[1]: 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755-2fc5d13b8cdc79bd.service: Failed with result 'exit-code'.
Nov 23 15:59:39 np0005532762 multipathd[212383]: 3528.299903 | --------start up--------
Nov 23 15:59:39 np0005532762 multipathd[212383]: 3528.299925 | read /etc/multipath.conf
Nov 23 15:59:39 np0005532762 multipathd[212383]: 3528.304603 | path checkers start up
Nov 23 15:59:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:40 np0005532762 python3.9[212575]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:41 np0005532762 python3.9[212753]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 15:59:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:42 np0005532762 python3.9[212905]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 23 15:59:42 np0005532762 kernel: Key type psk registered
Nov 23 15:59:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:43 np0005532762 python3.9[213069]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 15:59:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:43.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 15:59:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 15:59:44 np0005532762 python3.9[213192]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763931583.2132385-1851-242402505142477/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:45 np0005532762 python3.9[213344]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:46 np0005532762 python3.9[213497]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:46 np0005532762 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 15:59:46 np0005532762 systemd[1]: Stopped Load Kernel Modules.
Nov 23 15:59:46 np0005532762 systemd[1]: Stopping Load Kernel Modules...
Nov 23 15:59:46 np0005532762 systemd[1]: Starting Load Kernel Modules...
Nov 23 15:59:46 np0005532762 systemd[1]: Finished Load Kernel Modules.
Nov 23 15:59:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:47 np0005532762 python3.9[213653]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 15:59:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:47.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:49.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:49 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:49 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:49 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:50 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:50 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:50 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 15:59:50 np0005532762 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 15:59:50 np0005532762 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 15:59:50 np0005532762 lvm[213779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 15:59:50 np0005532762 lvm[213779]: VG ceph_vg0 finished
Nov 23 15:59:50 np0005532762 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 15:59:50 np0005532762 systemd[1]: Starting man-db-cache-update.service...
Nov 23 15:59:50 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:50 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:50 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:51 np0005532762 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 15:59:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 15:59:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 15:59:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 20:59:51.056 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 15:59:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205951 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 15:59:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:51 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:52 np0005532762 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 15:59:52 np0005532762 systemd[1]: Finished man-db-cache-update.service.
Nov 23 15:59:52 np0005532762 systemd[1]: man-db-cache-update.service: Consumed 1.541s CPU time.
Nov 23 15:59:52 np0005532762 systemd[1]: run-r66adbe3ddcc74f7a92d86be4ad7bc57c.service: Deactivated successfully.
Nov 23 15:59:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:52 np0005532762 python3.9[215123]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 15:59:52 np0005532762 systemd[1]: Stopping Open-iSCSI...
Nov 23 15:59:52 np0005532762 iscsid[203042]: iscsid shutting down.
Nov 23 15:59:52 np0005532762 systemd[1]: iscsid.service: Deactivated successfully.
Nov 23 15:59:52 np0005532762 systemd[1]: Stopped Open-iSCSI.
Nov 23 15:59:52 np0005532762 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 15:59:52 np0005532762 systemd[1]: Starting Open-iSCSI...
Nov 23 15:59:52 np0005532762 systemd[1]: Started Open-iSCSI.
Nov 23 15:59:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:53 np0005532762 python3.9[215277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 15:59:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:53 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:53.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/205954 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 15:59:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:54 np0005532762 python3.9[215434]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 15:59:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:55 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 15:59:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:55.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 15:59:56 np0005532762 podman[215559]: 2025-11-23 20:59:56.018798584 +0000 UTC m=+0.102238444 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 15:59:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:56 np0005532762 python3.9[215603]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 15:59:56 np0005532762 systemd[1]: Reloading.
Nov 23 15:59:56 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 15:59:56 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 15:59:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:57.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:57 np0005532762 python3.9[215794]: ansible-ansible.builtin.service_facts Invoked
Nov 23 15:59:57 np0005532762 network[215812]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 15:59:57 np0005532762 network[215813]: 'network-scripts' will be removed from distribution in near future.
Nov 23 15:59:57 np0005532762 network[215814]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 15:59:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:57 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:57.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 15:59:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:20:59:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 15:59:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 20:59:59 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 15:59:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 15:59:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 15:59:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:20:59:59.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:00 np0005532762 ceph-mon[80135]: overall HEALTH_OK
Nov 23 16:00:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:00:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:01.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:01 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:01.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:03.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:00:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:00:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:03 np0005532762 python3.9[216121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:04 np0005532762 python3.9[216274]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:04 np0005532762 podman[216276]: 2025-11-23 21:00:04.85355712 +0000 UTC m=+0.079891511 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 16:00:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:05.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:05 np0005532762 python3.9[216453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:05 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:05.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:06 np0005532762 python3.9[216607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b40021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:00:06 np0005532762 python3.9[216760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:07.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:07 np0005532762 python3.9[216914]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:07 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:08 np0005532762 python3.9[217067]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:09.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:09 np0005532762 python3.9[217220]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:00:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:09.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:10 np0005532762 podman[217247]: 2025-11-23 21:00:10.63761176 +0000 UTC m=+0.049501715 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:00:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:11.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:11 np0005532762 python3.9[217395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:11 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:11.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:12 np0005532762 python3.9[217547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:12 np0005532762 python3.9[217699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:13 np0005532762 python3.9[217851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210013 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:00:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:13 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:14 np0005532762 python3.9[218004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:15 np0005532762 python3.9[218156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:15 np0005532762 python3.9[218309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:15 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:16.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:16 np0005532762 python3.9[218461]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:17.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:17 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:18.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:18 np0005532762 python3.9[218614]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:18 np0005532762 python3.9[218766]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:19.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:19 np0005532762 python3.9[218918]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:19 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:19 np0005532762 python3.9[219071]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:20 np0005532762 python3.9[219223]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:21.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:21 np0005532762 python3.9[219375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:21 np0005532762 python3.9[219552]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:21 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:22 np0005532762 python3.9[219707]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:00:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:23 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:23 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:24 np0005532762 python3.9[219860]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:25 np0005532762 python3.9[220013]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 16:00:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:25 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:26 np0005532762 podman[220137]: 2025-11-23 21:00:26.489697789 +0000 UTC m=+0.069636029 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:00:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:26 np0005532762 python3.9[220182]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 16:00:26 np0005532762 systemd[1]: Reloading.
Nov 23 16:00:26 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:00:26 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:00:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:27 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:27 np0005532762 python3.9[220374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:28 np0005532762 python3.9[220527]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:28 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:29 np0005532762 python3.9[220680]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:29 np0005532762 python3.9[220834]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:29 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.193485) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630193536, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1000, "num_deletes": 251, "total_data_size": 2266422, "memory_usage": 2313456, "flush_reason": "Manual Compaction"}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 23 16:00:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630206976, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1496311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19849, "largest_seqno": 20844, "table_properties": {"data_size": 1491820, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9799, "raw_average_key_size": 19, "raw_value_size": 1482875, "raw_average_value_size": 2953, "num_data_blocks": 96, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931548, "oldest_key_time": 1763931548, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 13521 microseconds, and 4160 cpu microseconds.
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.207012) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1496311 bytes OK
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.207028) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208268) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208283) EVENT_LOG_v1 {"time_micros": 1763931630208278, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2261523, prev total WAL file size 2261523, number of live WAL files 2.
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1461KB)], [36(13MB)]
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630209010, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15560370, "oldest_snapshot_seqno": -1}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4981 keys, 13378883 bytes, temperature: kUnknown
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630311239, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 13378883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13344529, "index_size": 20804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 127139, "raw_average_key_size": 25, "raw_value_size": 13252980, "raw_average_value_size": 2660, "num_data_blocks": 854, "num_entries": 4981, "num_filter_entries": 4981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.311447) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13378883 bytes
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.322130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(19.3) write-amplify(8.9) OK, records in: 5497, records dropped: 516 output_compression: NoCompression
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.322172) EVENT_LOG_v1 {"time_micros": 1763931630322156, "job": 20, "event": "compaction_finished", "compaction_time_micros": 102291, "compaction_time_cpu_micros": 26155, "output_level": 6, "num_output_files": 1, "total_output_size": 13378883, "num_input_records": 5497, "num_output_records": 4981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630322637, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931630325281, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.208892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:00:30.325341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:00:30 np0005532762 python3.9[220987]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:31 np0005532762 python3.9[221140]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:31.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:31 np0005532762 python3.9[221293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:31 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:32 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:32 np0005532762 python3.9[221447]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 16:00:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:32 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:33.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:33 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:33 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:34.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:34 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:34 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:00:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:00:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:35.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:35 np0005532762 podman[221667]: 2025-11-23 21:00:35.236347426 +0000 UTC m=+0.077575920 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:00:35 np0005532762 python3.9[221739]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:35 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:00:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:36 np0005532762 python3.9[221932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:36 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:36 np0005532762 python3.9[222084]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:36 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:37.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:37 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:37 np0005532762 python3.9[222237]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:38 np0005532762 python3.9[222389]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:38 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:38 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75880032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:39.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:39 np0005532762 python3.9[222541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:39 np0005532762 python3.9[222694]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:39 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:40 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:40 np0005532762 python3.9[222846]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:40 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:41 np0005532762 podman[222995]: 2025-11-23 21:00:41.063609391 +0000 UTC m=+0.061624747 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:00:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:41.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:41 np0005532762 python3.9[223042]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:41 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:41 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:00:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:41 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:41 np0005532762 python3.9[223221]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:42 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:42 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:43 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:44 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:45.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:45 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:46 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:46 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:47.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:47 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:48 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:48 np0005532762 python3.9[223376]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 23 16:00:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:48 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:49 np0005532762 python3.9[223529]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 16:00:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:49 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:50 np0005532762 python3.9[223688]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 16:00:50 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:00:50 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:00:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:50 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.057 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:00:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.057 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:00:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:00:51.058 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:00:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:51 np0005532762 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 16:00:51 np0005532762 systemd-logind[793]: New session 54 of user zuul.
Nov 23 16:00:51 np0005532762 systemd[1]: Started Session 54 of User zuul.
Nov 23 16:00:51 np0005532762 systemd[1]: session-54.scope: Deactivated successfully.
Nov 23 16:00:51 np0005532762 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Nov 23 16:00:51 np0005532762 systemd-logind[793]: Removed session 54.
Nov 23 16:00:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:51 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:52 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:52 np0005532762 python3.9[223878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:53 np0005532762 python3.9[223999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931652.124245-3434-273030129110473/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:53 np0005532762 python3.9[224150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:53 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210053 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:00:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:54 np0005532762 python3.9[224226]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:54 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:54 np0005532762 python3.9[224376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:55 np0005532762 python3.9[224497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931654.4825194-3434-165862229189253/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:55 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:56 np0005532762 python3.9[224648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:56 np0005532762 python3.9[224769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931655.5805774-3434-51066565829997/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:56 np0005532762 podman[224770]: 2025-11-23 21:00:56.650586533 +0000 UTC m=+0.062190073 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 16:00:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:56 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75840016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:57 np0005532762 python3.9[224937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:57 np0005532762 python3.9[225059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931656.7227464-3434-19654306606951/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:57 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:00:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:00:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:00:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:58 np0005532762 python3.9[225209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:00:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:00:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:58 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:00:58 np0005532762 python3.9[225330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931657.9103436-3434-9273580497764/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:00:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:00:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:00:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:00:59.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:00:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:00:59 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75840016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:00 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:01 np0005532762 python3.9[225483]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:01 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:02 np0005532762 python3.9[225676]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:02 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:03 np0005532762 python3.9[225828]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:01:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:03 np0005532762 python3.9[225981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:03 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8002ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:04 np0005532762 python3.9[226104]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763931663.4229288-3755-224818914187681/.source _original_basename=.t2p5yfun follow=False checksum=7c7d8744d02362b5febd08bf84fb657e50088a13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 23 16:01:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:04 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:05 np0005532762 podman[226231]: 2025-11-23 21:01:05.639953073 +0000 UTC m=+0.082472200 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:05 np0005532762 python3.9[226272]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:05 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:01:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:01:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:01:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3928 writes, 21K keys, 3928 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 3928 writes, 3928 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1458 writes, 6861 keys, 1458 commit groups, 1.0 writes per commit group, ingest: 16.43 MB, 0.03 MB/s#012Interval WAL: 1458 writes, 1458 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     60.1      0.55              0.08        10    0.055       0      0       0.0       0.0#012  L6      1/0   12.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     96.4     82.0      1.42              0.30         9    0.158     43K   4824       0.0       0.0#012 Sum      1/0   12.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     69.4     75.9      1.97              0.38        19    0.104     43K   4824       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.5    100.3    100.2      0.65              0.17         8    0.081     22K   2563       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     96.4     82.0      1.42              0.30         9    0.158     43K   4824       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.4      0.55              0.08         9    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.15 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 2.0 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 8.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(482,8.23 MB,2.70677%) FilterBlock(19,130.80 KB,0.0420169%) IndexBlock(19,251.02 KB,0.0806357%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 16:01:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:06 np0005532762 python3.9[226435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:06 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:07 np0005532762 python3.9[226556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931666.1309242-3833-72308968981228/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:01:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:07 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:08 np0005532762 python3.9[226707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 16:01:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:08.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009de0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:08 np0005532762 python3.9[226828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763931667.6287699-3879-104748204440611/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 16:01:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:08 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:01:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:09 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:10.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:10 np0005532762 python3.9[226981]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 23 16:01:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:10 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:11 np0005532762 python3.9[227133]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 16:01:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:11 np0005532762 podman[227172]: 2025-11-23 21:01:11.660069448 +0000 UTC m=+0.071856419 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 16:01:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:11 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:12.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:12 np0005532762 python3[227306]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 16:01:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:12 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:13 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:14.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:14 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210115 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:01:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:15 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:16 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:17 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7590003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:18 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:18 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:19 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:20 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:21 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75a8004c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:22 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:23 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7584003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:24 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7588001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:25.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:25 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:25 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f757c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:26 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:27 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f75b4009ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f757c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:28 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:29 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:01:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:30 np0005532762 kernel: ganesha.nfsd[227414]: segfault at 50 ip 00007f765e4d732e sp 00007f762cff8210 error 4 in libntirpc.so.5.8[7f765e4bc000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 23 16:01:30 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:01:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[212120]: 23/11/2025 21:01:30 : epoch 692375ba : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7594002ad0 fd 39 proxy ignored for local
Nov 23 16:01:30 np0005532762 systemd[1]: Started Process Core Dump (PID 227431/UID 0).
Nov 23 16:01:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:32 np0005532762 systemd-coredump[227432]: Process 212134 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007f765e4d732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:01:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:33 np0005532762 systemd[1]: systemd-coredump@7-227431-0.service: Deactivated successfully.
Nov 23 16:01:33 np0005532762 systemd[1]: systemd-coredump@7-227431-0.service: Consumed 1.354s CPU time.
Nov 23 16:01:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:36 np0005532762 podman[227439]: 2025-11-23 21:01:36.451711506 +0000 UTC m=+2.463528163 container died 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 23 16:01:36 np0005532762 systemd[1]: var-lib-containers-storage-overlay-63ac0bfe19a9b8adea475ad895db306f3a67519182ff8422f85cbb367036cc4d-merged.mount: Deactivated successfully.
Nov 23 16:01:36 np0005532762 podman[227419]: 2025-11-23 21:01:36.523655316 +0000 UTC m=+8.919421606 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 16:01:36 np0005532762 podman[227439]: 2025-11-23 21:01:36.530630801 +0000 UTC m=+2.542447448 container remove 36fdc947acb0f74c6cb2dbe393a95acb88d4327855046b049a9b027d3568eb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:01:36 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:01:36 np0005532762 podman[227320]: 2025-11-23 21:01:36.548272449 +0000 UTC m=+24.314196028 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:36 np0005532762 podman[227457]: 2025-11-23 21:01:36.653617695 +0000 UTC m=+0.130603368 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 16:01:36 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:01:36 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.574s CPU time.
Nov 23 16:01:36 np0005532762 podman[227529]: 2025-11-23 21:01:36.732206641 +0000 UTC m=+0.050745528 container create 3eab058616580740aadc24acbbd43c84853a46eb879fdefff975864a15415e9c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 16:01:36 np0005532762 podman[227529]: 2025-11-23 21:01:36.703247652 +0000 UTC m=+0.021786559 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:36 np0005532762 python3[227306]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 23 16:01:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:37 np0005532762 python3.9[227724]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210138 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:01:38 np0005532762 python3.9[227879]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 23 16:01:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:39 np0005532762 python3.9[228032]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 16:01:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:40.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:40 np0005532762 python3[228184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 16:01:41 np0005532762 podman[228280]: 2025-11-23 21:01:41.129141586 +0000 UTC m=+0.046640168 container create e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 16:01:41 np0005532762 podman[228280]: 2025-11-23 21:01:41.103479325 +0000 UTC m=+0.020977937 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 23 16:01:41 np0005532762 python3[228184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 23 16:01:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:41 np0005532762 podman[228455]: 2025-11-23 21:01:41.953801863 +0000 UTC m=+0.064654057 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:42 np0005532762 python3.9[228534]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:42 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:01:43 np0005532762 python3.9[228688]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:43.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:43 np0005532762 python3.9[228840]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763931703.1955934-4154-164626617065143/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 16:01:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:01:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:01:44 np0005532762 python3.9[228916]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 16:01:44 np0005532762 systemd[1]: Reloading.
Nov 23 16:01:44 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:01:44 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:01:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:45 np0005532762 python3.9[229028]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 16:01:45 np0005532762 systemd[1]: Reloading.
Nov 23 16:01:45 np0005532762 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 16:01:45 np0005532762 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 16:01:45 np0005532762 systemd[1]: Starting nova_compute container...
Nov 23 16:01:45 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:01:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:45 np0005532762 podman[229069]: 2025-11-23 21:01:45.850776263 +0000 UTC m=+0.093993171 container init e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm)
Nov 23 16:01:45 np0005532762 podman[229069]: 2025-11-23 21:01:45.857283376 +0000 UTC m=+0.100500254 container start e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:01:45 np0005532762 podman[229069]: nova_compute
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + sudo -E kolla_set_configs
Nov 23 16:01:45 np0005532762 systemd[1]: Started nova_compute container.
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Validating config file
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying service configuration files
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Deleting /etc/ceph
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Creating directory /etc/ceph
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Writing out command to execute
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:45 np0005532762 nova_compute[229084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:45 np0005532762 nova_compute[229084]: ++ cat /run_command
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + CMD=nova-compute
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + ARGS=
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + sudo kolla_copy_cacerts
Nov 23 16:01:45 np0005532762 nova_compute[229084]: Running command: 'nova-compute'
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + [[ ! -n '' ]]
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + . kolla_extend_start
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + umask 0022
Nov 23 16:01:45 np0005532762 nova_compute[229084]: + exec nova-compute
Nov 23 16:01:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:46 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 8.
Nov 23 16:01:46 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:01:46 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.574s CPU time.
Nov 23 16:01:46 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:01:47 np0005532762 podman[229170]: 2025-11-23 21:01:47.035743465 +0000 UTC m=+0.039246995 container create a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:01:47 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:47 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:47 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:47 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5f9ab2d01d0012c20a9d3e4190fdc56bf8f17f77de53f364aa847543f0855/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:47 np0005532762 podman[229170]: 2025-11-23 21:01:47.101838802 +0000 UTC m=+0.105342352 container init a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:01:47 np0005532762 podman[229170]: 2025-11-23 21:01:47.10705214 +0000 UTC m=+0.110555670 container start a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:01:47 np0005532762 bash[229170]: a20cc2100a0ce143f194bbe51ab7e7ee427f407c69a4b8a256f1b12ed5026683
Nov 23 16:01:47 np0005532762 podman[229170]: 2025-11-23 21:01:47.018601658 +0000 UTC m=+0.022105188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:01:47 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:01:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:47 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:01:47 np0005532762 python3.9[229352]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.445 229088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.446 229088 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 16:01:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.624 229088 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.658 229088 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:01:48 np0005532762 nova_compute[229084]: 2025-11-23 21:01:48.658 229088 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 16:01:48 np0005532762 python3.9[229530]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.273 229088 INFO nova.virt.driver [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.529 229088 INFO nova.compute.provider_config [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.538 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.539 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.540 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.541 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.542 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.543 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.544 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.545 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.546 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.547 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.548 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.549 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.550 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.551 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.552 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.553 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.554 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.555 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.556 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.557 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.558 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.559 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.560 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.561 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.562 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.563 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.564 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.565 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.566 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.567 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.568 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.569 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.570 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.571 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.572 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.573 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.574 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.575 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.576 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.577 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.578 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.579 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.580 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.581 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.582 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.583 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.584 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.585 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.586 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.587 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.588 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.589 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.590 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.591 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.592 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.593 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.594 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.595 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.596 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.597 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.598 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.599 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.600 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.601 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.602 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.603 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.604 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.605 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.606 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.607 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.608 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.609 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.610 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.611 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.612 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.613 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.614 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.615 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 WARNING oslo_config.cfg [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 16:01:49 np0005532762 nova_compute[229084]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 16:01:49 np0005532762 nova_compute[229084]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 16:01:49 np0005532762 nova_compute[229084]: and ``live_migration_inbound_addr`` respectively.
Nov 23 16:01:49 np0005532762 nova_compute[229084]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.616 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.617 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.618 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.619 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.620 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.621 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.622 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.623 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.624 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.625 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.626 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.627 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.628 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.629 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.630 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.631 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.632 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.633 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.634 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.635 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.636 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.637 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.638 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.639 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.640 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.641 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.642 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.643 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.644 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.645 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.646 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.647 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.648 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.649 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.650 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.651 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.652 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.653 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.654 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.655 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.656 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.657 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.658 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.659 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.660 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.661 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.662 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.663 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.664 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.665 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.666 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.667 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.668 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.669 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.670 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.671 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.672 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.673 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.674 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.675 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.676 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.677 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.678 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.679 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.680 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.681 229088 DEBUG oslo_service.service [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.682 229088 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 16:01:49 np0005532762 python3.9[229682]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.696 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.697 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.697 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.698 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 16:01:49 np0005532762 systemd[1]: Starting libvirt QEMU daemon...
Nov 23 16:01:49 np0005532762 systemd[1]: Started libvirt QEMU daemon.
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.775 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4c6b5afcd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.777 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4c6b5afcd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.777 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.836 229088 WARNING nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 23 16:01:49 np0005532762 nova_compute[229084]: 2025-11-23 21:01:49.836 229088 DEBUG nova.virt.libvirt.volume.mount [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 16:01:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.591 229088 INFO nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <host>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <uuid>dffd854b-01ce-4a28-b7a6-32174dbe320c</uuid>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <arch>x86_64</arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <microcode version='16777317'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <signature family='23' model='49' stepping='0'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='x2apic'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='tsc-deadline'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='osxsave'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='hypervisor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='tsc_adjust'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='spec-ctrl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='stibp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='arch-capabilities'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='cmp_legacy'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='topoext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='virt-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='lbrv'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='tsc-scale'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='vmcb-clean'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='pause-filter'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='pfthreshold'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='rdctl-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='mds-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature name='pschange-mc-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <pages unit='KiB' size='4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <pages unit='KiB' size='2048'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <pages unit='KiB' size='1048576'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <power_management>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <suspend_mem/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </power_management>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <iommu support='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <migration_features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <live/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <uri_transports>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <uri_transport>tcp</uri_transport>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <uri_transport>rdma</uri_transport>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </uri_transports>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </migration_features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <topology>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <cells num='1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <cell id='0'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <memory unit='KiB'>7864320</memory>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <distances>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <sibling id='0' value='10'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          </distances>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          <cpus num='8'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:          </cpus>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        </cell>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </cells>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </topology>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <cache>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </cache>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <secmodel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model>selinux</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <doi>0</doi>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </secmodel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <secmodel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model>dac</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <doi>0</doi>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </secmodel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </host>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <guest>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <os_type>hvm</os_type>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <arch name='i686'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <wordsize>32</wordsize>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <domain type='qemu'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <domain type='kvm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <pae/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <nonpae/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <apic default='on' toggle='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <cpuselection/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <deviceboot/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <externalSnapshot/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </guest>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <guest>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <os_type>hvm</os_type>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <arch name='x86_64'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <wordsize>64</wordsize>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <domain type='qemu'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <domain type='kvm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <apic default='on' toggle='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <cpuselection/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <deviceboot/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <externalSnapshot/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </guest>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 
Nov 23 16:01:50 np0005532762 nova_compute[229084]: </capabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.597 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.615 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 16:01:50 np0005532762 nova_compute[229084]: <domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <domain>kvm</domain>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <arch>i686</arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <vcpu max='4096'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <iothreads supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <os supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='firmware'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <loader supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>rom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pflash</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='readonly'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>yes</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='secure'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </loader>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </os>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='maximumMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='succor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='custom' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-128'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-256'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-512'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <memoryBacking supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='sourceType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>anonymous</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>memfd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </memoryBacking>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <disk supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='diskDevice'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>disk</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cdrom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>floppy</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>lun</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>fdc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>sata</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </disk>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <graphics supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vnc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egl-headless</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </graphics>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <video supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='modelType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vga</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cirrus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>none</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>bochs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ramfb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </video>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hostdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='mode'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>subsystem</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='startupPolicy'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>mandatory</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>requisite</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>optional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='subsysType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pci</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='capsType'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='pciBackend'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hostdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <rng supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>random</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </rng>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <filesystem supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='driverType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>path</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>handle</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtiofs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </filesystem>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <tpm supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-tis</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-crb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emulator</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>external</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendVersion'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>2.0</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </tpm>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <redirdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </redirdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <channel supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </channel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <crypto supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </crypto>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <interface supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>passt</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </interface>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <panic supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>isa</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>hyperv</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </panic>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <console supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>null</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dev</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pipe</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stdio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>udp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tcp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu-vdagent</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </console>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <gic supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <genid supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backup supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <async-teardown supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <ps2 supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sev supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sgx supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hyperv supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='features'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>relaxed</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vapic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>spinlocks</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vpindex</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>runtime</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>synic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stimer</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reset</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vendor_id</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>frequencies</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reenlightenment</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tlbflush</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ipi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>avic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emsr_bitmap</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>xmm_input</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hyperv>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <launchSecurity supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='sectype'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tdx</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </launchSecurity>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: </domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.620 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 16:01:50 np0005532762 nova_compute[229084]: <domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <domain>kvm</domain>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <arch>i686</arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <vcpu max='240'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <iothreads supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <os supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='firmware'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <loader supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>rom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pflash</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='readonly'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>yes</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='secure'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </loader>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </os>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='maximumMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='succor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='custom' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-128'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-256'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-512'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <memoryBacking supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='sourceType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>anonymous</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>memfd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </memoryBacking>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <disk supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='diskDevice'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>disk</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cdrom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>floppy</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>lun</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ide</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>fdc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>sata</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </disk>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <graphics supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vnc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egl-headless</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </graphics>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <video supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='modelType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vga</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cirrus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>none</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>bochs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ramfb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </video>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hostdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='mode'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>subsystem</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='startupPolicy'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>mandatory</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>requisite</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>optional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='subsysType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pci</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='capsType'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='pciBackend'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hostdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <rng supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>random</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </rng>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <filesystem supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='driverType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>path</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>handle</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtiofs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </filesystem>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <tpm supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-tis</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-crb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emulator</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>external</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendVersion'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>2.0</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </tpm>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <redirdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </redirdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <channel supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </channel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <crypto supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </crypto>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <interface supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>passt</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </interface>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <panic supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>isa</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>hyperv</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </panic>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <console supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>null</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dev</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pipe</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stdio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>udp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tcp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu-vdagent</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </console>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <gic supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <genid supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backup supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <async-teardown supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <ps2 supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sev supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sgx supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hyperv supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='features'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>relaxed</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vapic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>spinlocks</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vpindex</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>runtime</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>synic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stimer</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reset</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vendor_id</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>frequencies</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reenlightenment</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tlbflush</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ipi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>avic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emsr_bitmap</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>xmm_input</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hyperv>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <launchSecurity supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='sectype'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tdx</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </launchSecurity>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: </domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.650 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.653 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 16:01:50 np0005532762 nova_compute[229084]: <domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <domain>kvm</domain>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <arch>x86_64</arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <vcpu max='4096'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <iothreads supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <os supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='firmware'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>efi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <loader supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>rom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pflash</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='readonly'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>yes</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='secure'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>yes</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </loader>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </os>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='maximumMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='succor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='custom' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-128'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-256'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-512'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <memoryBacking supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='sourceType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>anonymous</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>memfd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </memoryBacking>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <disk supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='diskDevice'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>disk</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cdrom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>floppy</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>lun</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>fdc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>sata</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </disk>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <graphics supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vnc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egl-headless</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </graphics>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <video supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='modelType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vga</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cirrus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>none</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>bochs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ramfb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </video>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hostdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='mode'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>subsystem</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='startupPolicy'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>mandatory</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>requisite</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>optional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='subsysType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pci</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='capsType'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='pciBackend'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hostdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <rng supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>random</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </rng>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <filesystem supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='driverType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>path</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>handle</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtiofs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </filesystem>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <tpm supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-tis</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-crb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emulator</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>external</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendVersion'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>2.0</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </tpm>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <redirdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </redirdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <channel supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </channel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <crypto supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </crypto>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <interface supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>passt</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </interface>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <panic supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>isa</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>hyperv</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </panic>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <console supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>null</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dev</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pipe</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stdio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>udp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tcp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu-vdagent</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </console>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <gic supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <genid supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backup supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <async-teardown supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <ps2 supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sev supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sgx supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hyperv supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='features'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>relaxed</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vapic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>spinlocks</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vpindex</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>runtime</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>synic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stimer</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reset</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vendor_id</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>frequencies</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reenlightenment</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tlbflush</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ipi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>avic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emsr_bitmap</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>xmm_input</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hyperv>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <launchSecurity supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='sectype'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tdx</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </launchSecurity>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: </domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.719 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 16:01:50 np0005532762 nova_compute[229084]: <domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <domain>kvm</domain>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <arch>x86_64</arch>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <vcpu max='240'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <iothreads supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <os supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='firmware'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <loader supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>rom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pflash</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='readonly'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>yes</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='secure'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>no</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </loader>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </os>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='maximumMigratable'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>on</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>off</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <vendor>AMD</vendor>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='succor'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <mode name='custom' supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Denverton-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='auto-ibrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amd-psfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='stibp-always-on'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='EPYC-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-128'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-256'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx10-512'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='prefetchiti'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Haswell-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512er'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512pf'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fma4'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tbm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xop'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='amx-tile'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-bf16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-fp16'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bitalg'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrc'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fzrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='la57'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='taa-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xfd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ifma'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cmpccxadd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fbsdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='fsrs'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ibrs-all'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mcdt-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pbrsb-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='psdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='serialize'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vaes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='hle'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='rtm'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512bw'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512cd'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512dq'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512f'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='avx512vl'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='invpcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pcid'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='pku'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='mpx'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='core-capability'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='split-lock-detect'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='cldemote'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='erms'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='gfni'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdir64b'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='movdiri'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='xsaves'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='athlon-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='core2duo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='coreduo-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='n270-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='ss'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <blockers model='phenom-v1'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnow'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <feature name='3dnowext'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </blockers>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </mode>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </cpu>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <memoryBacking supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <enum name='sourceType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>anonymous</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <value>memfd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </memoryBacking>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <disk supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='diskDevice'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>disk</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cdrom</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>floppy</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>lun</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ide</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>fdc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>sata</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </disk>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <graphics supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vnc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egl-headless</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </graphics>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <video supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='modelType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vga</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>cirrus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>none</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>bochs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ramfb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </video>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hostdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='mode'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>subsystem</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='startupPolicy'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>mandatory</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>requisite</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>optional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='subsysType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pci</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>scsi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='capsType'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='pciBackend'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hostdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <rng supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtio-non-transitional</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>random</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>egd</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </rng>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <filesystem supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='driverType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>path</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>handle</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>virtiofs</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </filesystem>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <tpm supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-tis</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tpm-crb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emulator</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>external</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendVersion'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>2.0</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </tpm>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <redirdev supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='bus'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>usb</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </redirdev>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <channel supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </channel>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <crypto supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendModel'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>builtin</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </crypto>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <interface supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='backendType'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>default</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>passt</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </interface>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <panic supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='model'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>isa</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>hyperv</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </panic>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <console supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='type'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>null</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vc</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pty</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dev</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>file</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>pipe</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stdio</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>udp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tcp</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>unix</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>qemu-vdagent</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>dbus</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </console>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </devices>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  <features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <gic supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <genid supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <backup supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <async-teardown supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <ps2 supported='yes'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sev supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <sgx supported='no'/>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <hyperv supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='features'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>relaxed</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vapic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>spinlocks</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vpindex</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>runtime</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>synic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>stimer</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reset</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>vendor_id</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>frequencies</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>reenlightenment</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tlbflush</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>ipi</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>avic</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>emsr_bitmap</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>xmm_input</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </defaults>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </hyperv>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    <launchSecurity supported='yes'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      <enum name='sectype'>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:        <value>tdx</value>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:      </enum>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:    </launchSecurity>
Nov 23 16:01:50 np0005532762 nova_compute[229084]:  </features>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: </domainCapabilities>
Nov 23 16:01:50 np0005532762 nova_compute[229084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.783 229088 DEBUG nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.783 229088 INFO nova.virt.libvirt.host [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Secure Boot support detected#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.786 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.786 229088 INFO nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.802 229088 DEBUG nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.848 229088 INFO nova.virt.node [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Determined node identity bb217351-d4c8-44a4-9137-08393a1f72bc from /var/lib/nova/compute_id#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.871 229088 WARNING nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Compute nodes ['bb217351-d4c8-44a4-9137-08393a1f72bc'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.906 229088 INFO nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.940 229088 WARNING nova.compute.manager [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.941 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.942 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:01:50 np0005532762 nova_compute[229084]: 2025-11-23 21:01:50.942 229088 DEBUG oslo_concurrency.processutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:01:50 np0005532762 python3.9[229895]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 16:01:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.058 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:01:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.059 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:01:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:01:51.059 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:01:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:01:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:51.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:01:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:01:51 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1784443819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:01:51 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.389 229088 DEBUG oslo_concurrency.processutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:01:51 np0005532762 systemd[1]: Starting libvirt nodedev daemon...
Nov 23 16:01:51 np0005532762 systemd[1]: Started libvirt nodedev daemon.
Nov 23 16:01:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.667 229088 WARNING nova.virt.libvirt.driver [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.668 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5293MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.669 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.669 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.684 229088 WARNING nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] No compute node record for compute-1.ctlplane.example.com:bb217351-d4c8-44a4-9137-08393a1f72bc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host bb217351-d4c8-44a4-9137-08393a1f72bc could not be found.#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.705 229088 INFO nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: bb217351-d4c8-44a4-9137-08393a1f72bc#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.773 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:01:51 np0005532762 nova_compute[229084]: 2025-11-23 21:01:51.774 229088 DEBUG nova.compute.resource_tracker [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:01:52 np0005532762 python3.9[230120]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 16:01:52 np0005532762 systemd[1]: Stopping nova_compute container...
Nov 23 16:01:52 np0005532762 nova_compute[229084]: 2025-11-23 21:01:52.126 229088 DEBUG oslo_concurrency.lockutils [None req-4f570c4d-d029-407f-b9a0-f4c5640aaeda - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:01:52 np0005532762 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:01:52 np0005532762 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:01:52 np0005532762 nova_compute[229084]: 2025-11-23 21:01:52.127 229088 DEBUG oslo_concurrency.lockutils [None req-33eeb46a-6354-40c0-942d-41a37d734650 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:01:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:52 np0005532762 virtqemud[229705]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 23 16:01:52 np0005532762 virtqemud[229705]: hostname: compute-1
Nov 23 16:01:52 np0005532762 virtqemud[229705]: End of file while reading data: Input/output error
Nov 23 16:01:52 np0005532762 systemd[1]: libpod-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df.scope: Deactivated successfully.
Nov 23 16:01:52 np0005532762 systemd[1]: libpod-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df.scope: Consumed 3.644s CPU time.
Nov 23 16:01:52 np0005532762 podman[230124]: 2025-11-23 21:01:52.543694819 +0000 UTC m=+0.451103010 container died e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:01:52 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df-userdata-shm.mount: Deactivated successfully.
Nov 23 16:01:52 np0005532762 systemd[1]: var-lib-containers-storage-overlay-7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc-merged.mount: Deactivated successfully.
Nov 23 16:01:52 np0005532762 podman[230124]: 2025-11-23 21:01:52.81850137 +0000 UTC m=+0.725909551 container cleanup e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 16:01:52 np0005532762 podman[230124]: nova_compute
Nov 23 16:01:52 np0005532762 podman[230154]: nova_compute
Nov 23 16:01:52 np0005532762 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 23 16:01:52 np0005532762 systemd[1]: Stopped nova_compute container.
Nov 23 16:01:52 np0005532762 systemd[1]: Starting nova_compute container...
Nov 23 16:01:53 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:01:53 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:53 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:53 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:53 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:53 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f9a839ebcc2f2be232b73ad095168158bbf494a95d293329848704129b6cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 16:01:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:53.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:53 np0005532762 podman[230167]: 2025-11-23 21:01:53.337540746 +0000 UTC m=+0.417232989 container init e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:01:53 np0005532762 podman[230167]: 2025-11-23 21:01:53.343120595 +0000 UTC m=+0.422812808 container start e57bd1d81dfaddb9e853d32adec041960bea7d0beff0b4ed65acc6be6ec8e0df (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + sudo -E kolla_set_configs
Nov 23 16:01:53 np0005532762 podman[230167]: nova_compute
Nov 23 16:01:53 np0005532762 systemd[1]: Started nova_compute container.
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Validating config file
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying service configuration files
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /etc/ceph
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Creating directory /etc/ceph
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Writing out command to execute
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 16:01:53 np0005532762 nova_compute[230183]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 16:01:53 np0005532762 nova_compute[230183]: ++ cat /run_command
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + CMD=nova-compute
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + ARGS=
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + sudo kolla_copy_cacerts
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + [[ ! -n '' ]]
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + . kolla_extend_start
Nov 23 16:01:53 np0005532762 nova_compute[230183]: Running command: 'nova-compute'
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + umask 0022
Nov 23 16:01:53 np0005532762 nova_compute[230183]: + exec nova-compute
Nov 23 16:01:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:53 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:01:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[229185]: 23/11/2025 21:01:53 : epoch 6923763b : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:01:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:01:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:01:55.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.271 230187 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.272 230187 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.396 230187 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.417 230187 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.418 230187 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.869 230187 INFO nova.virt.driver [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.968 230187 INFO nova.compute.provider_config [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.974 230187 DEBUG oslo_concurrency.lockutils [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.975 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.976 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.977 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.978 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.979 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.980 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.981 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.982 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.983 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.984 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.985 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.986 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.987 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.988 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.989 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.990 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.991 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.992 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.993 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.994 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.995 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.996 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.997 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.998 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:55 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:55.999 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.000 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.001 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.002 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.003 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.004 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.005 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.006 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.007 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.008 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.009 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.010 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.011 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.012 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.013 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.014 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.015 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.016 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.017 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.018 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.019 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.020 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.021 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.022 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.023 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.024 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.025 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.026 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.027 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.028 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.029 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.030 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.031 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.032 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.033 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.034 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.035 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.036 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.037 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.038 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.039 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.040 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.041 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.042 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.043 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.044 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.045 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.046 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.047 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.048 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.049 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.050 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 WARNING oslo_config.cfg [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 16:01:56 np0005532762 nova_compute[230183]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 16:01:56 np0005532762 nova_compute[230183]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 16:01:56 np0005532762 nova_compute[230183]: and ``live_migration_inbound_addr`` respectively.
Nov 23 16:01:56 np0005532762 nova_compute[230183]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.051 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.052 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.053 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_secret_uuid        = 03808be8-ae4a-5548-82e6-4a294f1bc627 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.054 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.055 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.056 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.057 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.058 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.059 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.060 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.061 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.062 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.063 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.064 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.065 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.066 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.067 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.068 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.069 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.070 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.071 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.072 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.073 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.074 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.075 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.076 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.077 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.078 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.079 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.080 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.081 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.082 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.083 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.084 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.085 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.086 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.087 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.088 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.089 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.090 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.091 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.092 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.093 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.094 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.095 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.096 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.097 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.098 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.099 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.100 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.101 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.102 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.103 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.104 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.105 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.106 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.107 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.108 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.109 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.110 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.111 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.112 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.113 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.114 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.115 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.116 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.117 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.117 230187 DEBUG oslo_service.service [None req-f5f50915-3c29-4909-9d1e-c33351499e69 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.118 230187 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.132 230187 INFO nova.virt.node [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Determined node identity bb217351-d4c8-44a4-9137-08393a1f72bc from /var/lib/nova/compute_id#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.133 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.134 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.146 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb4f07e6520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.150 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb4f07e6520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.151 230187 INFO nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.159 230187 INFO nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <host>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <uuid>dffd854b-01ce-4a28-b7a6-32174dbe320c</uuid>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <arch>x86_64</arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model>EPYC-Rome-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <vendor>AMD</vendor>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <microcode version='16777317'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <signature family='23' model='49' stepping='0'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='x2apic'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='tsc-deadline'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='osxsave'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='hypervisor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='tsc_adjust'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='spec-ctrl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='stibp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='arch-capabilities'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='cmp_legacy'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='topoext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='virt-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='lbrv'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='tsc-scale'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='vmcb-clean'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='pause-filter'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='pfthreshold'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='svme-addr-chk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='rdctl-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='mds-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature name='pschange-mc-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <pages unit='KiB' size='4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <pages unit='KiB' size='2048'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <pages unit='KiB' size='1048576'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <power_management>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <suspend_mem/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </power_management>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <iommu support='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <migration_features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <live/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <uri_transports>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <uri_transport>tcp</uri_transport>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <uri_transport>rdma</uri_transport>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </uri_transports>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </migration_features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <topology>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <cells num='1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <cell id='0'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <memory unit='KiB'>7864320</memory>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <distances>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <sibling id='0' value='10'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          </distances>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          <cpus num='8'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:          </cpus>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        </cell>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </cells>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </topology>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <cache>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </cache>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <secmodel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model>selinux</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <doi>0</doi>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </secmodel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <secmodel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model>dac</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <doi>0</doi>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </secmodel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </host>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <guest>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <os_type>hvm</os_type>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <arch name='i686'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <wordsize>32</wordsize>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <domain type='qemu'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <domain type='kvm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <pae/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <nonpae/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <apic default='on' toggle='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <cpuselection/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <deviceboot/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <externalSnapshot/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </guest>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <guest>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <os_type>hvm</os_type>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <arch name='x86_64'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <wordsize>64</wordsize>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <domain type='qemu'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <domain type='kvm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <acpi default='on' toggle='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <apic default='on' toggle='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <cpuselection/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <deviceboot/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <disksnapshot default='on' toggle='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <externalSnapshot/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </guest>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 
Nov 23 16:01:56 np0005532762 nova_compute[230183]: </capabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: #033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.161 230187 DEBUG nova.virt.libvirt.volume.mount [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 16:01:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:01:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:01:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:01:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.168 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.171 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 16:01:56 np0005532762 nova_compute[230183]: <domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <domain>kvm</domain>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <arch>i686</arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <vcpu max='4096'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <iothreads supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <os supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='firmware'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <loader supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>rom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pflash</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='readonly'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>yes</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='secure'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </loader>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='maximumMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <vendor>AMD</vendor>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='succor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='custom' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-128'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-256'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-512'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <memoryBacking supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='sourceType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>anonymous</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>memfd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </memoryBacking>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <disk supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='diskDevice'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>disk</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cdrom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>floppy</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>lun</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>fdc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>sata</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <graphics supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vnc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egl-headless</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <video supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='modelType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vga</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cirrus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>none</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>bochs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ramfb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hostdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='mode'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>subsystem</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='startupPolicy'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>mandatory</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>requisite</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>optional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='subsysType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pci</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='capsType'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='pciBackend'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hostdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <rng supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>random</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <filesystem supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='driverType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>path</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>handle</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtiofs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </filesystem>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <tpm supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-tis</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-crb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emulator</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>external</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendVersion'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>2.0</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </tpm>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <redirdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </redirdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <channel supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </channel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <crypto supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </crypto>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <interface supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>passt</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <panic supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>isa</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>hyperv</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </panic>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <console supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>null</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dev</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pipe</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stdio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>udp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tcp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu-vdagent</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <gic supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <genid supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backup supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <async-teardown supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <ps2 supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sev supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sgx supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hyperv supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='features'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>relaxed</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vapic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>spinlocks</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vpindex</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>runtime</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>synic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stimer</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reset</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vendor_id</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>frequencies</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reenlightenment</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tlbflush</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ipi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>avic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emsr_bitmap</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>xmm_input</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hyperv>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <launchSecurity supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='sectype'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tdx</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </launchSecurity>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: </domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.178 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 16:01:56 np0005532762 nova_compute[230183]: <domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <domain>kvm</domain>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <arch>i686</arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <vcpu max='240'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <iothreads supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <os supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='firmware'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <loader supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>rom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pflash</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='readonly'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>yes</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='secure'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </loader>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='maximumMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <vendor>AMD</vendor>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='succor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='custom' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-128'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-256'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-512'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <memoryBacking supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='sourceType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>anonymous</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>memfd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </memoryBacking>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <disk supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='diskDevice'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>disk</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cdrom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>floppy</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>lun</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ide</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>fdc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>sata</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <graphics supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vnc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egl-headless</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <video supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='modelType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vga</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cirrus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>none</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>bochs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ramfb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hostdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='mode'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>subsystem</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='startupPolicy'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>mandatory</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>requisite</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>optional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='subsysType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pci</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='capsType'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='pciBackend'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hostdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <rng supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>random</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <filesystem supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='driverType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>path</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>handle</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtiofs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </filesystem>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <tpm supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-tis</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-crb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emulator</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>external</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendVersion'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>2.0</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </tpm>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <redirdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </redirdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <channel supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </channel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <crypto supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </crypto>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <interface supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>passt</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <panic supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>isa</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>hyperv</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </panic>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <console supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>null</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dev</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pipe</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stdio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>udp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tcp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu-vdagent</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <gic supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <genid supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backup supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <async-teardown supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <ps2 supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sev supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sgx supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hyperv supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='features'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>relaxed</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vapic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>spinlocks</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vpindex</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>runtime</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>synic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stimer</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reset</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vendor_id</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>frequencies</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reenlightenment</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tlbflush</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ipi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>avic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emsr_bitmap</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>xmm_input</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hyperv>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <launchSecurity supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='sectype'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tdx</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </launchSecurity>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: </domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.208 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.212 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 16:01:56 np0005532762 nova_compute[230183]: <domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <domain>kvm</domain>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <arch>x86_64</arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <vcpu max='4096'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <iothreads supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <os supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='firmware'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>efi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <loader supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>rom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pflash</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='readonly'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>yes</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='secure'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>yes</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </loader>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='maximumMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <vendor>AMD</vendor>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='succor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='custom' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-128'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-256'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-512'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SierraForest-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-ne-convert'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cmpccxadd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Client-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Skylake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='core-capability'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='split-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Snowridge-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='athlon-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='core2duo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='coreduo-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='n270-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='phenom-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnow'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='3dnowext'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <memoryBacking supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='sourceType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>anonymous</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>memfd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </memoryBacking>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <disk supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='diskDevice'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>disk</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cdrom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>floppy</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>lun</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>fdc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>sata</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <graphics supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vnc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egl-headless</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <video supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='modelType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vga</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>cirrus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>none</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>bochs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ramfb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hostdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='mode'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>subsystem</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='startupPolicy'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>mandatory</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>requisite</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>optional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='subsysType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pci</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>scsi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='capsType'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='pciBackend'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hostdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <rng supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtio-non-transitional</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>random</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>egd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <filesystem supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='driverType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>path</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>handle</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>virtiofs</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </filesystem>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <tpm supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-tis</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tpm-crb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emulator</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>external</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendVersion'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>2.0</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </tpm>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <redirdev supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='bus'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>usb</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </redirdev>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <channel supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </channel>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <crypto supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendModel'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>builtin</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </crypto>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <interface supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='backendType'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>default</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>passt</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <panic supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='model'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>isa</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>hyperv</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </panic>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <console supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>null</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vc</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pty</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dev</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>file</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pipe</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stdio</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>udp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tcp</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>unix</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>qemu-vdagent</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>dbus</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <gic supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <vmcoreinfo supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <genid supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backingStoreInput supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <backup supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <async-teardown supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <ps2 supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sev supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <sgx supported='no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <hyperv supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='features'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>relaxed</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vapic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>spinlocks</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vpindex</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>runtime</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>synic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>stimer</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reset</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>vendor_id</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>frequencies</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>reenlightenment</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tlbflush</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>ipi</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>avic</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>emsr_bitmap</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>xmm_input</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <spinlocks>4095</spinlocks>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <stimer_direct>on</stimer_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_direct>on</tlbflush_direct>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <tlbflush_extended>on</tlbflush_extended>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </defaults>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </hyperv>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <launchSecurity supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='sectype'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>tdx</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </launchSecurity>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: </domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 16:01:56 np0005532762 nova_compute[230183]: 2025-11-23 21:01:56.277 230187 DEBUG nova.virt.libvirt.host [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 16:01:56 np0005532762 nova_compute[230183]: <domainCapabilities>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <domain>kvm</domain>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <arch>x86_64</arch>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <vcpu max='240'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <iothreads supported='yes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <os supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <enum name='firmware'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <loader supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='type'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>rom</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>pflash</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='readonly'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>yes</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='secure'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>no</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </loader>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:  <cpu>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-passthrough' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='hostPassthroughMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='maximum' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <enum name='maximumMigratable'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>on</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <value>off</value>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </enum>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='host-model' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <vendor>AMD</vendor>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='x2apic'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='hypervisor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='stibp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='overflow-recov'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='succor'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lbrv'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='tsc-scale'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='flushbyasid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pause-filter'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='pfthreshold'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <feature policy='disable' name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    </mode>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:    <mode name='custom' supported='yes'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Broadwell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Cooperlake-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mpx'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Denverton-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Dhyana-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='auto-ibrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Milan-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amd-psfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='no-nested-data-bp'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='null-sel-clr-base'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='stibp-always-on'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-Rome-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='EPYC-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='GraniteRapids-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-128'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-256'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx10-512'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='bus-lock-detect'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='cldemote'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fbsdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrc'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrs'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fzrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='mcdt-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdir64b'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='movdiri'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pbrsb-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='prefetchiti'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='psdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='sbdr-ssdp-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='serialize'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tsx-ldtrk'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xfd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Haswell-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v3'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v6'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Icelake-Server-v7'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vl'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fsrm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='gfni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='hle'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ibrs-all'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='invpcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='la57'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pcid'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='pku'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='rtm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='taa-no'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vaes'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='vpclmulqdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xsaves'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-IBRS'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='IvyBridge-v2'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='KnightsMill-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4fmaps'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-4vnniw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512er'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512pf'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='erms'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='ss'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G4-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='Opteron_G5-v1'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='fma4'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='tbm'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='xop'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      </blockers>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:      <blockers model='SapphireRapids'>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-int8'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='amx-tile'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx-vnni'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-bf16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-fp16'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512-vpopcntdq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bitalg'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512bw'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512cd'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512dq'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512f'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512ifma'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi'/>
Nov 23 16:01:56 np0005532762 nova_compute[230183]:        <feature name='avx512vbmi2'/>
Nov 23 16:03:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:08 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 23 16:03:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1952705122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 23 16:03:09 np0005532762 rsyslogd[1004]: imjournal: 1295 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 23 16:03:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:09.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210310 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:03:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d600021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:10 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:11.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d600021c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:12 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:13.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:14 np0005532762 podman[231177]: 2025-11-23 21:03:14.648770851 +0000 UTC m=+0.056506545 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 23 16:03:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:14 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:15.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:16.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d700091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:16 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:17.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d700091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:18 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:03:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:19.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:20 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:21.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:21 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:03:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:21 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:03:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:22 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:24 : epoch 6923767c : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:03:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:26 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:28 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210330 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:03:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:30 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:32 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:34 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d60002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:36 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:03:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:03:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:38 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:39 np0005532762 podman[231237]: 2025-11-23 21:03:39.663631442 +0000 UTC m=+0.081338314 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:03:39 np0005532762 podman[231238]: 2025-11-23 21:03:39.674807969 +0000 UTC m=+0.085504215 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:03:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:40 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:41.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:42 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:44 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:45 np0005532762 podman[231308]: 2025-11-23 21:03:45.672680866 +0000 UTC m=+0.088159527 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 16:03:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:46 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:48 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:03:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:03:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:50 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:03:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:03:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:03:51.060 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:03:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:52.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:52 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d70009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:54.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:54 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:03:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:03:55 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:03:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210356 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:03:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d6c003520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:56.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.751 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.751 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.767 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.768 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.768 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.788 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:03:56 np0005532762 nova_compute[230183]: 2025-11-23 21:03:56.789 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:03:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:56 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d7000a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:03:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1602560503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.257 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:03:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:03:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.398 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5265MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.454 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.454 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.475 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:03:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:03:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3423069219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.941 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.946 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.962 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.964 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:03:57 np0005532762 nova_compute[230183]: 2025-11-23 21:03:57.965 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:03:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:03:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[230968]: 23/11/2025 21:03:58 : epoch 6923767c : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4d4c003c10 fd 39 proxy ignored for local
Nov 23 16:03:58 np0005532762 kernel: ganesha.nfsd[231128]: segfault at 50 ip 00007f4e1f52732e sp 00007f4dd77fd210 error 4 in libntirpc.so.5.8[7f4e1f50c000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 23 16:03:58 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:03:58 np0005532762 systemd[1]: Started Process Core Dump (PID 231459/UID 0).
Nov 23 16:03:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:03:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:03:58.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:03:58 np0005532762 nova_compute[230183]: 2025-11-23 21:03:58.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:03:58 np0005532762 nova_compute[230183]: 2025-11-23 21:03:58.626 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:03:58 np0005532762 nova_compute[230183]: 2025-11-23 21:03:58.626 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:03:58 np0005532762 nova_compute[230183]: 2025-11-23 21:03:58.642 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:03:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:03:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:03:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:03:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:03:59 np0005532762 systemd-coredump[231460]: Process 230972 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f4e1f52732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:03:59 np0005532762 systemd[1]: systemd-coredump@9-231459-0.service: Deactivated successfully.
Nov 23 16:03:59 np0005532762 systemd[1]: systemd-coredump@9-231459-0.service: Consumed 1.140s CPU time.
Nov 23 16:03:59 np0005532762 podman[231466]: 2025-11-23 21:03:59.63584353 +0000 UTC m=+0.030406390 container died 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:03:59 np0005532762 systemd[1]: var-lib-containers-storage-overlay-a83b7eb3f5afeb4b96d0cc4d64fcb7e4fecc6be41db13f731cf53b24c499dfe1-merged.mount: Deactivated successfully.
Nov 23 16:03:59 np0005532762 podman[231466]: 2025-11-23 21:03:59.691016257 +0000 UTC m=+0.085579067 container remove 5f2bb77731781714ca2bb677abc38580fbcf9262dee8e9de7400628ca8495195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:03:59 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:03:59 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:03:59 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.425s CPU time.
Nov 23 16:04:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:04:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:04:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:01 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:01 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:04:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:02.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210404 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:04:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:04:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:04:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:04:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:04:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:04:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968166939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:04:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.066 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:04:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.067 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:04:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:09.069 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:04:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:09.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:10 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 10.
Nov 23 16:04:10 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:04:10 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.425s CPU time.
Nov 23 16:04:10 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:04:10 np0005532762 podman[231566]: 2025-11-23 21:04:10.14180205 +0000 UTC m=+0.051845757 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 23 16:04:10 np0005532762 podman[231564]: 2025-11-23 21:04:10.170686438 +0000 UTC m=+0.083150341 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:04:10 np0005532762 podman[231649]: 2025-11-23 21:04:10.304703659 +0000 UTC m=+0.066924064 container create 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 23 16:04:10 np0005532762 podman[231649]: 2025-11-23 21:04:10.259067399 +0000 UTC m=+0.021287794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:04:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:10 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:10 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:10 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:10 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:04:10 np0005532762 podman[231649]: 2025-11-23 21:04:10.554428107 +0000 UTC m=+0.316648522 container init 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 23 16:04:10 np0005532762 podman[231649]: 2025-11-23 21:04:10.559303328 +0000 UTC m=+0.321523723 container start 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:04:10 np0005532762 bash[231649]: 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c
Nov 23 16:04:10 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:04:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:11.937519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931851937580, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2382, "num_deletes": 251, "total_data_size": 6373654, "memory_usage": 6460352, "flush_reason": "Manual Compaction"}
Nov 23 16:04:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852064315, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4132392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20850, "largest_seqno": 23226, "table_properties": {"data_size": 4122798, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19855, "raw_average_key_size": 20, "raw_value_size": 4103685, "raw_average_value_size": 4191, "num_data_blocks": 264, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931630, "oldest_key_time": 1763931630, "file_creation_time": 1763931851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 126835 microseconds, and 8026 cpu microseconds.
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.064365) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4132392 bytes OK
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.064385) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108180) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108233) EVENT_LOG_v1 {"time_micros": 1763931852108222, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.108259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6363185, prev total WAL file size 6363185, number of live WAL files 2.
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.110108) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4035KB)], [39(12MB)]
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852110161, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17511275, "oldest_snapshot_seqno": -1}
Nov 23 16:04:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5440 keys, 15313707 bytes, temperature: kUnknown
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852461447, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 15313707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15275021, "index_size": 23984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137266, "raw_average_key_size": 25, "raw_value_size": 15174232, "raw_average_value_size": 2789, "num_data_blocks": 991, "num_entries": 5440, "num_filter_entries": 5440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931852, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.461753) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15313707 bytes
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.481389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.8 rd, 43.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.8 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 5960, records dropped: 520 output_compression: NoCompression
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.481425) EVENT_LOG_v1 {"time_micros": 1763931852481411, "job": 22, "event": "compaction_finished", "compaction_time_micros": 351372, "compaction_time_cpu_micros": 26817, "output_level": 6, "num_output_files": 1, "total_output_size": 15313707, "num_input_records": 5960, "num_output_records": 5440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
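[Editor's note: the compaction summary above reports read-write-amplify(7.9) and write-amplify(3.7). Those ratios can be reproduced offline from the byte counts RocksDB itself logged in the surrounding EVENT_LOG_v1 entries; the sketch below is a sanity check using those logged constants, not part of the journal.]

```python
# Sanity-check the amplification figures from RocksDB job 22 above.
# All constants are copied from the log lines; nothing queries a live DB.
L0_INPUT = 4_132_392        # table #41, the freshly flushed L0 file ("file_size")
TOTAL_INPUT = 17_511_275    # "input_data_size" from the compaction_started event
OUTPUT = 15_313_707         # "total_output_size" from compaction_finished

# write-amplify: bytes written to L6 per byte of new (L0) data
write_amp = OUTPUT / L0_INPUT
# read-write-amplify: total bytes read plus written per byte of new data
rw_amp = (TOTAL_INPUT + OUTPUT) / L0_INPUT

print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")
# matches the logged "read-write-amplify(7.9) write-amplify(3.7)"
```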
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852482326, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931852485174, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.110004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:04:12.485279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:04:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:14.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:16.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:16 np0005532762 podman[231710]: 2025-11-23 21:04:16.649452587 +0000 UTC m=+0.064719744 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210420 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000022:nfs.cephfs.0: -2
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:04:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fc0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:04:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:04:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210426 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:04:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210426 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f980016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:04:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:04:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:04:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:34 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:36 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:37.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:37 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:04:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:37 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:04:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:38 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:04:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:04:40 np0005532762 podman[231784]: 2025-11-23 21:04:40.638340641 +0000 UTC m=+0.050327487 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:04:40 np0005532762 podman[231783]: 2025-11-23 21:04:40.677931958 +0000 UTC m=+0.092841132 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 23 16:04:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:40 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:04:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:42 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:44 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210446 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:04:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:46 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:47 np0005532762 podman[231857]: 2025-11-23 21:04:47.637111181 +0000 UTC m=+0.056836372 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 16:04:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:48 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:50 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.061 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:04:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:04:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:04:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:04:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f9c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:52 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:53.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:54 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:55.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:55 np0005532762 nova_compute[230183]: 2025-11-23 21:04:55.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:56 np0005532762 nova_compute[230183]: 2025-11-23 21:04:56.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:04:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:04:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:56 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.445 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.446 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.447 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:04:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:04:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3199479046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:04:57 np0005532762 nova_compute[230183]: 2025-11-23 21:04:57.891 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.030 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.031 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5274MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.032 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.033 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.087 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.087 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.101 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:04:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:04:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:04:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:04:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:04:58 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348488227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.554 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.559 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.576 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.578 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:04:58 np0005532762 nova_compute[230183]: 2025-11-23 21:04:58.578 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:04:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:04:58 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:04:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:04:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:04:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:04:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:04:59 np0005532762 nova_compute[230183]: 2025-11-23 21:04:59.579 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:04:59 np0005532762 nova_compute[230183]: 2025-11-23 21:04:59.579 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:04:59 np0005532762 nova_compute[230183]: 2025-11-23 21:04:59.580 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:04:59 np0005532762 nova_compute[230183]: 2025-11-23 21:04:59.591 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:05:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:00 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:01 np0005532762 podman[232053]: 2025-11-23 21:05:01.698482409 +0000 UTC m=+0.061213561 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:05:01 np0005532762 podman[232053]: 2025-11-23 21:05:01.795625686 +0000 UTC m=+0.158356858 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:05:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:02 np0005532762 podman[232173]: 2025-11-23 21:05:02.207540944 +0000 UTC m=+0.053726099 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:05:02 np0005532762 podman[232173]: 2025-11-23 21:05:02.213202467 +0000 UTC m=+0.059387602 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:05:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:05:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:05:02 np0005532762 podman[232265]: 2025-11-23 21:05:02.517779022 +0000 UTC m=+0.048443436 container exec 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 23 16:05:02 np0005532762 podman[232265]: 2025-11-23 21:05:02.529249511 +0000 UTC m=+0.059913895 container exec_died 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:05:02 np0005532762 podman[232329]: 2025-11-23 21:05:02.744685875 +0000 UTC m=+0.060625364 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:05:02 np0005532762 podman[232329]: 2025-11-23 21:05:02.75820572 +0000 UTC m=+0.074145179 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:05:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:02 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:03 np0005532762 podman[232395]: 2025-11-23 21:05:03.018245756 +0000 UTC m=+0.073475981 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 16:05:03 np0005532762 podman[232395]: 2025-11-23 21:05:03.055180081 +0000 UTC m=+0.110410216 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=)
Nov 23 16:05:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f98003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:04.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:04 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:05:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:05 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:05:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fbc003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:06 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:08 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:05:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:10 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:11.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:11 np0005532762 podman[232569]: 2025-11-23 21:05:11.652034095 +0000 UTC m=+0.065926337 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:05:11 np0005532762 podman[232568]: 2025-11-23 21:05:11.704606452 +0000 UTC m=+0.118314889 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:05:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:12 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:14 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:15.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:05:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:16.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:05:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:16 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:17 np0005532762 podman[232619]: 2025-11-23 21:05:17.74423279 +0000 UTC m=+0.067421727 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 23 16:05:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210518 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:05:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:18.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:18 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:20.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:20 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:05:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:21.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:05:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:22.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:22 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:24.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:24 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:25.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:26 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:27 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:05:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:28 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:05:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:05:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fa4004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:30 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7fb0003a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f90003ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[231664]: 23/11/2025 21:05:32 : epoch 692376ca : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7f8c003c10 fd 48 proxy ignored for local
Nov 23 16:05:32 np0005532762 kernel: ganesha.nfsd[232539]: segfault at 50 ip 00007f806faca32e sp 00007f8026ffc210 error 4 in libntirpc.so.5.8[7f806faaf000+2c000] likely on CPU 6 (core 0, socket 6)
Nov 23 16:05:32 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:05:32 np0005532762 systemd[1]: Started Process Core Dump (PID 232671/UID 0).
Nov 23 16:05:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:33 np0005532762 systemd-coredump[232672]: Process 231668 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007f806faca32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:05:33 np0005532762 systemd[1]: systemd-coredump@10-232671-0.service: Deactivated successfully.
Nov 23 16:05:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:33 np0005532762 podman[232677]: 2025-11-23 21:05:33.487627836 +0000 UTC m=+0.028166440 container died 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 23 16:05:33 np0005532762 systemd[1]: var-lib-containers-storage-overlay-c8959c0210da2eb362aef70369d12cda156853952a6808f0124f32ae90a3c6af-merged.mount: Deactivated successfully.
Nov 23 16:05:33 np0005532762 podman[232677]: 2025-11-23 21:05:33.547811427 +0000 UTC m=+0.088350011 container remove 28e57aa42cd2ce22aa90d15386d4b9c6fbc4c7148e89ce4a602828fc7daf9d0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:05:33 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:05:33 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:05:33 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.324s CPU time.
Nov 23 16:05:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:34.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:35.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:36.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210538 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 142ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:05:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210538 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:05:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:38.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:39.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - - [23/Nov/2025:21:05:39.955 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 23 16:05:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:42 np0005532762 podman[232729]: 2025-11-23 21:05:42.698688408 +0000 UTC m=+0.085786501 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:05:42 np0005532762 podman[232728]: 2025-11-23 21:05:42.760926845 +0000 UTC m=+0.149207560 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:05:43 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 11.
Nov 23 16:05:43 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:05:43 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.324s CPU time.
Nov 23 16:05:43 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:05:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:43.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:44 np0005532762 podman[232845]: 2025-11-23 21:05:44.061571547 +0000 UTC m=+0.044455118 container create a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 23 16:05:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:44 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:05:44 np0005532762 podman[232845]: 2025-11-23 21:05:44.127135443 +0000 UTC m=+0.110019034 container init a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 23 16:05:44 np0005532762 podman[232845]: 2025-11-23 21:05:44.133375552 +0000 UTC m=+0.116259123 container start a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:05:44 np0005532762 bash[232845]: a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2
Nov 23 16:05:44 np0005532762 podman[232845]: 2025-11-23 21:05:44.04384693 +0000 UTC m=+0.026730521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:05:44 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:05:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:44 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:05:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:44.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.371179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945371223, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1187, "num_deletes": 251, "total_data_size": 2796786, "memory_usage": 2842576, "flush_reason": "Manual Compaction"}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945379401, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1182461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23231, "largest_seqno": 24413, "table_properties": {"data_size": 1178229, "index_size": 1820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10920, "raw_average_key_size": 20, "raw_value_size": 1169096, "raw_average_value_size": 2189, "num_data_blocks": 78, "num_entries": 534, "num_filter_entries": 534, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931853, "oldest_key_time": 1763931853, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8302 microseconds, and 4314 cpu microseconds.
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379471) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1182461 bytes OK
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.379509) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380922) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380969) EVENT_LOG_v1 {"time_micros": 1763931945380959, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.380994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2791094, prev total WAL file size 2791094, number of live WAL files 2.
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.382274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1154KB)], [42(14MB)]
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945382328, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16496168, "oldest_snapshot_seqno": -1}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5491 keys, 13037002 bytes, temperature: kUnknown
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945538552, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13037002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13001258, "index_size": 20914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 138625, "raw_average_key_size": 25, "raw_value_size": 12902921, "raw_average_value_size": 2349, "num_data_blocks": 856, "num_entries": 5491, "num_filter_entries": 5491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931945, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.539034) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13037002 bytes
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.540634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.5 rd, 83.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 5974, records dropped: 483 output_compression: NoCompression
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.540660) EVENT_LOG_v1 {"time_micros": 1763931945540647, "job": 24, "event": "compaction_finished", "compaction_time_micros": 156408, "compaction_time_cpu_micros": 36323, "output_level": 6, "num_output_files": 1, "total_output_size": 13037002, "num_input_records": 5974, "num_output_records": 5491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945541299, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931945544896, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.382147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:05:45.545134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:05:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 23 16:05:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:46.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Nov 23 16:05:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:48 np0005532762 podman[232905]: 2025-11-23 21:05:48.64822139 +0000 UTC m=+0.060527452 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:05:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Nov 23 16:05:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:50 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:05:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:50 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:05:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.062 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:05:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:05:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:05:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:05:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:05:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:51.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:05:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:52.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:53.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Nov 23 16:05:55 np0005532762 nova_compute[230183]: 2025-11-23 21:05:55.435 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:05:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:55.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:05:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:56 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:56 np0005532762 nova_compute[230183]: 2025-11-23 21:05:56.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:56 np0005532762 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:56 np0005532762 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:56 np0005532762 nova_compute[230183]: 2025-11-23 21:05:56.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:05:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:05:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:57 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.424 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:05:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:05:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1302579223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:05:57 np0005532762 nova_compute[230183]: 2025-11-23 21:05:57.866 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:05:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:57.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.027 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.029 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5227MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.029 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.030 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.117 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.135 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:05:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:58 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb888000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210558 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:05:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:58 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:05:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:05:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:05:58 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1938272778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.611 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.615 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.633 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.634 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:05:58 np0005532762 nova_compute[230183]: 2025-11-23 21:05:58.635 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:05:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:05:59 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb88c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.635 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.635 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.636 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.657 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.657 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:59 np0005532762 nova_compute[230183]: 2025-11-23 21:05:59.658 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:05:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:05:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:05:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:05:59.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:00 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:00 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:00.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:01 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:06:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:06:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:02 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:02 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb88c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:03 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:04 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:04 np0005532762 kernel: ganesha.nfsd[232933]: segfault at 50 ip 00007fb95c13d32e sp 00007fb92a7fb210 error 4 in libntirpc.so.5.8[7fb95c122000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 23 16:06:04 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:06:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[232861]: 23/11/2025 21:06:04 : epoch 69237728 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8a00021d0 fd 38 proxy ignored for local
Nov 23 16:06:04 np0005532762 systemd[1]: Started Process Core Dump (PID 233018/UID 0).
Nov 23 16:06:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 23 16:06:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:04.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 23 16:06:05 np0005532762 systemd-coredump[233019]: Process 232865 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007fb95c13d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:06:05 np0005532762 systemd[1]: systemd-coredump@11-233018-0.service: Deactivated successfully.
Nov 23 16:06:05 np0005532762 systemd[1]: systemd-coredump@11-233018-0.service: Consumed 1.152s CPU time.
Nov 23 16:06:05 np0005532762 podman[233025]: 2025-11-23 21:06:05.707082754 +0000 UTC m=+0.044404512 container died a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:06:05 np0005532762 systemd[1]: var-lib-containers-storage-overlay-2e27abaefaba8199288bd5be063d902d57972d49e3076cc8c05a506a1ac3a488-merged.mount: Deactivated successfully.
Nov 23 16:06:05 np0005532762 podman[233025]: 2025-11-23 21:06:05.755076134 +0000 UTC m=+0.092397892 container remove a1edfbc64c688db19e55c818e2bdf0df61f7d0676d2c3acdb33415079a20ecc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 23 16:06:05 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:06:05 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:06:05 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.556s CPU time.
Nov 23 16:06:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:05.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:07.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:09.032 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:06:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:09.033 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:06:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:09.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210610 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:06:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 23 16:06:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 23 16:06:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:12 np0005532762 podman[233155]: 2025-11-23 21:06:12.983179193 +0000 UTC m=+0.064894460 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 23 16:06:13 np0005532762 podman[233154]: 2025-11-23 21:06:13.01982083 +0000 UTC m=+0.100983252 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 23 16:06:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:13.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:14 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:06:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:16.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:16 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 12.
Nov 23 16:06:16 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:06:16 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.556s CPU time.
Nov 23 16:06:16 np0005532762 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627...
Nov 23 16:06:16 np0005532762 podman[233250]: 2025-11-23 21:06:16.266973119 +0000 UTC m=+0.035694771 container create d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 23 16:06:16 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.fuxuha-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:16 np0005532762 podman[233250]: 2025-11-23 21:06:16.326031196 +0000 UTC m=+0.094752848 container init d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:06:16 np0005532762 podman[233250]: 2025-11-23 21:06:16.330479779 +0000 UTC m=+0.099201431 container start d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 23 16:06:16 np0005532762 bash[233250]: d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a
Nov 23 16:06:16 np0005532762 podman[233250]: 2025-11-23 21:06:16.252161778 +0000 UTC m=+0.020883460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:06:16 np0005532762 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 23 16:06:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 23 16:06:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:18 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:18.035 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:06:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:19 np0005532762 podman[233334]: 2025-11-23 21:06:19.223671794 +0000 UTC m=+0.072641454 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 23 16:06:19 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:19 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:06:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:20.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 23 16:06:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 23 16:06:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 23 16:06:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 23 16:06:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:29 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb144000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:31 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.059 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.060 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.085 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.231 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.231 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.241 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.242 230187 INFO nova.compute.claims [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:06:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.355 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210632 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 23 16:06:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:06:32 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/848485788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.811 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.817 230187 DEBUG nova.compute.provider_tree [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.834 230187 DEBUG nova.scheduler.client.report [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.862 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.863 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.931 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.932 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.951 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:06:32 np0005532762 nova_compute[230183]: 2025-11-23 21:06:32.969 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:06:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:33 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.069 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.070 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.071 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating image(s)#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.098 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.122 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.147 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.150 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.151 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.908 230187 WARNING oslo_policy.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.909 230187 WARNING oslo_policy.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.912 230187 DEBUG nova.policy [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:06:33 np0005532762 nova_compute[230183]: 2025-11-23 21:06:33.931 230187 DEBUG nova.virt.libvirt.imagebackend [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image locations are: [{'url': 'rbd://03808be8-ae4a-5548-82e6-4a294f1bc627/images/3c45fa6c-8a99-4359-a34e-d89f4e1e77d0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://03808be8-ae4a-5548-82e6-4a294f1bc627/images/3c45fa6c-8a99-4359-a34e-d89f4e1e77d0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 23 16:06:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.030 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:35 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.082 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.083 230187 DEBUG nova.virt.images [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] 3c45fa6c-8a99-4359-a34e-d89f4e1e77d0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.084 230187 DEBUG nova.privsep.utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.085 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.099 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Successfully created port: f23315bc-0f2d-4e45-91a2-0f72a8929b88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.391 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.part /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.395 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.444 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56.converted --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.445 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.471 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:35 np0005532762 nova_compute[230183]: 2025-11-23 21:06:35.474 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 b88f69cf-a706-408d-8dd0-9c891ac278df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.504165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995504207, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 255, "total_data_size": 1683854, "memory_usage": 1710824, "flush_reason": "Manual Compaction"}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995520330, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1093658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24418, "largest_seqno": 25227, "table_properties": {"data_size": 1089759, "index_size": 1679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8609, "raw_average_key_size": 18, "raw_value_size": 1081741, "raw_average_value_size": 2351, "num_data_blocks": 74, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931946, "oldest_key_time": 1763931946, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 16277 microseconds, and 5707 cpu microseconds.
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520439) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1093658 bytes OK
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.520488) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541748) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541807) EVENT_LOG_v1 {"time_micros": 1763931995541794, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.541833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1679569, prev total WAL file size 1679569, number of live WAL files 2.
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.543570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1068KB)], [45(12MB)]
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995543626, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 14130660, "oldest_snapshot_seqno": -1}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5424 keys, 13968182 bytes, temperature: kUnknown
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995811204, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13968182, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13931576, "index_size": 21968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138465, "raw_average_key_size": 25, "raw_value_size": 13833097, "raw_average_value_size": 2550, "num_data_blocks": 897, "num_entries": 5424, "num_filter_entries": 5424, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763931995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.811517) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13968182 bytes
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.843992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.8 rd, 52.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(25.7) write-amplify(12.8) OK, records in: 5951, records dropped: 527 output_compression: NoCompression
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.844020) EVENT_LOG_v1 {"time_micros": 1763931995844009, "job": 26, "event": "compaction_finished", "compaction_time_micros": 267666, "compaction_time_cpu_micros": 36772, "output_level": 6, "num_output_files": 1, "total_output_size": 13968182, "num_input_records": 5951, "num_output_records": 5424, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995844333, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763931995846184, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.543480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:35 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:06:35.846285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:06:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Nov 23 16:06:36 np0005532762 nova_compute[230183]: 2025-11-23 21:06:36.900 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Successfully updated port: f23315bc-0f2d-4e45-91a2-0f72a8929b88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:06:36 np0005532762 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:06:36 np0005532762 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:06:36 np0005532762 nova_compute[230183]: 2025-11-23 21:06:36.919 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:06:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:37 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:37 np0005532762 nova_compute[230183]: 2025-11-23 21:06:37.095 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:06:37 np0005532762 nova_compute[230183]: 2025-11-23 21:06:37.403 230187 DEBUG nova.compute.manager [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:06:37 np0005532762 nova_compute[230183]: 2025-11-23 21:06:37.404 230187 DEBUG nova.compute.manager [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:06:37 np0005532762 nova_compute[230183]: 2025-11-23 21:06:37.404 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:06:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:38 np0005532762 nova_compute[230183]: 2025-11-23 21:06:38.137 230187 DEBUG nova.network.neutron [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:06:38 np0005532762 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:06:38 np0005532762 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance network_info: |[{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:06:38 np0005532762 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:06:38 np0005532762 nova_compute[230183]: 2025-11-23 21:06:38.153 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:06:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:38.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:39 np0005532762 nova_compute[230183]: 2025-11-23 21:06:39.389 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:06:39 np0005532762 nova_compute[230183]: 2025-11-23 21:06:39.389 230187 DEBUG nova.network.neutron [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:06:39 np0005532762 nova_compute[230183]: 2025-11-23 21:06:39.401 230187 DEBUG oslo_concurrency.lockutils [req-345d58e9-898a-4a71-8f05-7b645ced737e req-ae82acbf-3479-437b-aa85-1f87e2911a12 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:06:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:39 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Nov 23 16:06:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.448 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 b88f69cf-a706-408d-8dd0-9c891ac278df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.511 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.609 230187 DEBUG nova.objects.instance [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.621 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.623 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Ensure instance console log exists: /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.624 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.627 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start _get_guest_xml network_info=[{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.633 230187 WARNING nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.638 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.639 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.641 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.642 230187 DEBUG nova.virt.libvirt.host [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.642 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.643 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:06:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.644 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:06:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:06:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:40.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.645 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.646 230187 DEBUG nova.virt.hardware [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.650 230187 DEBUG nova.privsep.utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 23 16:06:40 np0005532762 nova_compute[230183]: 2025-11-23 21:06:40.650 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:06:41 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4087426359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.076 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.103 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.106 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:41 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:06:41 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1092088785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.534 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.539 230187 DEBUG nova.virt.libvirt.vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:06:33Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.540 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.542 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.547 230187 DEBUG nova.objects.instance [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.572 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <uuid>b88f69cf-a706-408d-8dd0-9c891ac278df</uuid>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <name>instance-00000001</name>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-2101370279</nova:name>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:06:40</nova:creationTime>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <nova:port uuid="f23315bc-0f2d-4e45-91a2-0f72a8929b88">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="serial">b88f69cf-a706-408d-8dd0-9c891ac278df</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="uuid">b88f69cf-a706-408d-8dd0-9c891ac278df</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/b88f69cf-a706-408d-8dd0-9c891ac278df_disk">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:6f:7a:f0"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <target dev="tapf23315bc-0f"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/console.log" append="off"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:06:41 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:06:41 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:06:41 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:06:41 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.574 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Preparing to wait for external event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.575 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.576 230187 DEBUG nova.virt.libvirt.vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:06:33Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.576 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.577 230187 DEBUG nova.network.os_vif_util [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.577 230187 DEBUG os_vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.625 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.626 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.627 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.627 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.628 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.629 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.630 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.640 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.640 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.641 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.641 230187 INFO oslo.privsep.daemon [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp28m6et_r/privsep.sock']#033[00m
Nov 23 16:06:41 np0005532762 nova_compute[230183]: 2025-11-23 21:06:41.767 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.405 230187 INFO oslo.privsep.daemon [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.266 233675 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.271 233675 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.274 233675 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.274 233675 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233675#033[00m
Nov 23 16:06:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.723 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.724 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf23315bc-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.724 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf23315bc-0f, col_values=(('external_ids', {'iface-id': 'f23315bc-0f2d-4e45-91a2-0f72a8929b88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:7a:f0', 'vm-uuid': 'b88f69cf-a706-408d-8dd0-9c891ac278df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.771 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:42 np0005532762 NetworkManager[49021]: <info>  [1763932002.7724] manager: (tapf23315bc-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.773 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.782 230187 INFO os_vif [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f')#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.830 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.831 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.832 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:6f:7a:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.833 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Using config drive#033[00m
Nov 23 16:06:42 np0005532762 nova_compute[230183]: 2025-11-23 21:06:42.875 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:43 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:43 np0005532762 podman[233725]: 2025-11-23 21:06:43.629596418 +0000 UTC m=+0.057059772 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 23 16:06:43 np0005532762 podman[233724]: 2025-11-23 21:06:43.660717151 +0000 UTC m=+0.088673329 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 23 16:06:43 np0005532762 nova_compute[230183]: 2025-11-23 21:06:43.890 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Creating config drive at /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config#033[00m
Nov 23 16:06:43 np0005532762 nova_compute[230183]: 2025-11-23 21:06:43.895 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gqvvj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.043 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45gqvvj7" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.074 230187 DEBUG nova.storage.rbd_utils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.076 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.237 230187 DEBUG oslo_concurrency.processutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config b88f69cf-a706-408d-8dd0-9c891ac278df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.238 230187 INFO nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deleting local config drive /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df/disk.config because it was imported into RBD.#033[00m
Nov 23 16:06:44 np0005532762 systemd[1]: Starting libvirt secret daemon...
Nov 23 16:06:44 np0005532762 systemd[1]: Started libvirt secret daemon.
Nov 23 16:06:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:44 np0005532762 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 23 16:06:44 np0005532762 kernel: tapf23315bc-0f: entered promiscuous mode
Nov 23 16:06:44 np0005532762 NetworkManager[49021]: <info>  [1763932004.3244] manager: (tapf23315bc-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 23 16:06:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:44Z|00027|binding|INFO|Claiming lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 for this chassis.
Nov 23 16:06:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:44Z|00028|binding|INFO|f23315bc-0f2d-4e45-91a2-0f72a8929b88: Claiming fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.325 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.342 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:06:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.343 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 bound to our chassis#033[00m
Nov 23 16:06:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.344 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aadcd86-30a0-48ed-988a-324cae3af3e6#033[00m
Nov 23 16:06:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.346 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpfyh33cj_/privsep.sock']#033[00m
Nov 23 16:06:44 np0005532762 systemd-udevd[233844]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:06:44 np0005532762 NetworkManager[49021]: <info>  [1763932004.3725] device (tapf23315bc-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:06:44 np0005532762 NetworkManager[49021]: <info>  [1763932004.3733] device (tapf23315bc-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:06:44 np0005532762 systemd-machined[193469]: New machine qemu-1-instance-00000001.
Nov 23 16:06:44 np0005532762 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.402 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:44Z|00029|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 ovn-installed in OVS
Nov 23 16:06:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:44Z|00030|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 up in Southbound
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.412 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:44.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.706 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932004.7056391, b88f69cf-a706-408d-8dd0-9c891ac278df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.706 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Started (Lifecycle Event)#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.741 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.744 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932004.7096562, b88f69cf-a706-408d-8dd0-9c891ac278df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.744 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.756 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.759 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:06:44 np0005532762 nova_compute[230183]: 2025-11-23 21:06:44.771 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.027 142158 INFO oslo_service.service [-] Child 233840 exited with status 0#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.028 142158 WARNING oslo_service.service [-] pid 233840 not in child list#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.032 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.033 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfyh33cj_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.910 233901 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.915 233901 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.918 233901 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:44.918 233901 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233901#033[00m
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.035 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7fda8c-6a1c-41ed-8c2a-af8921a0f52e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.088 230187 DEBUG nova.compute.manager [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.088 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG oslo_concurrency.lockutils [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.089 230187 DEBUG nova.compute.manager [req-e17bd455-e319-417e-a0d7-49be30e566db req-1d68124b-289f-4e37-a0a6-794135a5ef38 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Processing event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.090 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.093 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.095 230187 INFO nova.virt.libvirt.driver [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance spawned successfully.#033[00m
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.095 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.101 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932005.1013858, b88f69cf-a706-408d-8dd0-9c891ac278df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.102 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Resumed (Lifecycle Event)
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.118 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.118 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.119 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.119 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.120 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.120 230187 DEBUG nova.virt.libvirt.driver [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.125 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.129 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.149 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.173 230187 INFO nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 12.10 seconds to spawn the instance on the hypervisor.
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.174 230187 DEBUG nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.234 230187 INFO nova.compute.manager [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 13.03 seconds to build instance.
Nov 23 16:06:45 np0005532762 nova_compute[230183]: 2025-11-23 21:06:45.261 230187 DEBUG oslo_concurrency.lockutils [None req-3fb409d5-7f3e-4901-9316-aab33950dbe9 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:45 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:06:45 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:45.847 233901 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.634 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b01960fe-cfa7-4186-9d49-0519c4438fce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.635 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aadcd86-31 in ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.637 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aadcd86-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.637 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b99ab6b0-5ac6-4556-840d-f064d5ab9107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.641 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[95453304-76f1-4c5b-805e-8ddaf2b9e254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.662 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1925a6-db4b-4a59-923f-671b185b2025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.695 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa8efde-9f7a-4ce9-aaa5-2c04746bdba5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:46 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:46.697 142158 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpuwkaw7zn/privsep.sock']
Nov 23 16:06:46 np0005532762 nova_compute[230183]: 2025-11-23 21:06:46.769 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.294 230187 DEBUG nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.295 230187 DEBUG oslo_concurrency.lockutils [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.296 230187 DEBUG nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.296 230187 WARNING nova.compute.manager [req-7243a388-ad7d-4362-ac71-e20f4a11bc80 req-8fdc5a8a-01d3-4d75-a897-bc221ac05d45 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state active and task_state None.
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.444 142158 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.444 142158 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuwkaw7zn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.313 233916 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.318 233916 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.320 233916 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.321 233916 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233916
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.447 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[f54965c4-26e2-4720-b1a8-08bd6e8a4333]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:47 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:47 np0005532762 nova_compute[230183]: 2025-11-23 21:06:47.772 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.968 233916 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.968 233916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:06:47 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:47.969 233916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:06:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:48.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.195 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.1963] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.1971] device (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.1986] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.1991] device (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.2003] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.2011] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.2016] device (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.2021] device (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.222 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.226 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:06:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.574 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ace8fd5-de8b-442a-93af-e11be78fa481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.5944] manager: (tap7aadcd86-30): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.593 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[53201a1d-d6aa-431a-9364-0fb4324ebeca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 systemd-udevd[233930]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.626 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cdeb26e7-dc06-4f2c-9e94-1d44839cd664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.629 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[040e1525-9226-494a-832b-c16a9c18b9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.6545] device (tap7aadcd86-30): carrier: link connected
Nov 23 16:06:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.663 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e78fa-08ce-4641-99da-10af232102c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.679 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1813bdf2-3375-4555-ab3d-8323f2b05726]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aadcd86-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:e4:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395722, 'reachable_time': 27230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233948, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.693 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3f47b221-ed6a-4cdc-b216-41a9d9c3a897]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:e450'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 395722, 'tstamp': 395722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233949, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.708 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1026876d-29cb-4255-b27b-135b2bb29d63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aadcd86-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:e4:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395722, 'reachable_time': 27230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233950, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.736 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0e18c4f3-362a-4cb7-a321-aeff69e73350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.794 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[effda4c6-5c13-4608-a25e-c3af8849e8ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.796 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aadcd86-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.797 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.797 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aadcd86-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 23 16:06:48 np0005532762 kernel: tap7aadcd86-30: entered promiscuous mode
Nov 23 16:06:48 np0005532762 NetworkManager[49021]: <info>  [1763932008.8000] manager: (tap7aadcd86-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.799 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.803 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aadcd86-30, col_values=(('external_ids', {'iface-id': 'aeecf50b-036b-450d-8620-c40267ec9fc6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:06:48 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:48Z|00031|binding|INFO|Releasing lport aeecf50b-036b-450d-8620-c40267ec9fc6 from this chassis (sb_readonly=0)
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.804 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.832 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.835 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.837 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[39776a6d-49b9-46e2-9097-5dd76e7fb27c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.839 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-7aadcd86-30a0-48ed-988a-324cae3af3e6
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/7aadcd86-30a0-48ed-988a-324cae3af3e6.pid.haproxy
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 7aadcd86-30a0-48ed-988a-324cae3af3e6
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:06:48 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:48.840 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'env', 'PROCESS_TAG=haproxy-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aadcd86-30a0-48ed-988a-324cae3af3e6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG nova.compute.manager [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG nova.compute.manager [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.935 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.936 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:06:48 np0005532762 nova_compute[230183]: 2025-11-23 21:06:48.936 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:06:49 np0005532762 podman[233982]: 2025-11-23 21:06:49.232374829 +0000 UTC m=+0.047461497 container create 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:06:49 np0005532762 systemd[1]: Started libpod-conmon-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope.
Nov 23 16:06:49 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:06:49 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c5263137e4ec5459bb96e030a6c0c4608606faf4bf3953f654301e7654b612/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:06:49 np0005532762 podman[233982]: 2025-11-23 21:06:49.294172852 +0000 UTC m=+0.109259530 container init 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:06:49 np0005532762 podman[233982]: 2025-11-23 21:06:49.299937132 +0000 UTC m=+0.115023800 container start 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 16:06:49 np0005532762 podman[233982]: 2025-11-23 21:06:49.206710498 +0000 UTC m=+0.021797186 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:06:49 np0005532762 podman[233995]: 2025-11-23 21:06:49.338290696 +0000 UTC m=+0.066905536 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 16:06:49 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : New worker (234022) forked
Nov 23 16:06:49 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : Loading success.
Nov 23 16:06:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:49 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Nov 23 16:06:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:50 np0005532762 nova_compute[230183]: 2025-11-23 21:06:50.955 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:06:50 np0005532762 nova_compute[230183]: 2025-11-23 21:06:50.955 230187 DEBUG nova.network.neutron [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:06:50 np0005532762 nova_compute[230183]: 2025-11-23 21:06:50.976 230187 DEBUG oslo_concurrency.lockutils [req-72284170-e7fb-47fd-bcb2-8ff1815a4897 req-1db0de17-ee36-48ab-bc99-32a7bcb05de1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:06:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:06:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.063 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:06:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:06:51.064 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:06:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:51 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:51 np0005532762 nova_compute[230183]: 2025-11-23 21:06:51.772 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:06:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:06:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:52.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:52 np0005532762 nova_compute[230183]: 2025-11-23 21:06:52.823 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:53 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:54.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:54.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.452 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.453 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.453 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:06:55 np0005532762 nova_compute[230183]: 2025-11-23 21:06:55.466 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:55 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:56 np0005532762 nova_compute[230183]: 2025-11-23 21:06:56.482 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:56 np0005532762 nova_compute[230183]: 2025-11-23 21:06:56.483 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:06:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:06:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:56.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:56 np0005532762 nova_compute[230183]: 2025-11-23 21:06:56.774 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:57 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:06:57 np0005532762 nova_compute[230183]: 2025-11-23 21:06:57.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:57 np0005532762 nova_compute[230183]: 2025-11-23 21:06:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:57 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:57 np0005532762 nova_compute[230183]: 2025-11-23 21:06:57.880 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:06:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:06:58.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1300027c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:06:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:06:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:06:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:06:58.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:06:58 np0005532762 nova_compute[230183]: 2025-11-23 21:06:58.913 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:06:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 16:06:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:06:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 16:06:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:06:59 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.201 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.225 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.225 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.226 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.227 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.271 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.272 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003ce0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:00 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:00 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2909697320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.715 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.781 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.782 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.928 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4835MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:00 np0005532762 nova_compute[230183]: 2025-11-23 21:07:00.929 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.050 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance b88f69cf-a706-408d-8dd0-9c891ac278df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.051 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.051 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.121 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.206 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.207 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.231 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.258 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.298 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:01 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:01 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:01 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3596254833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.739 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.746 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.777 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updated inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating resource provider bb217351-d4c8-44a4-9137-08393a1f72bc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.786 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.811 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:07:01 np0005532762 nova_compute[230183]: 2025-11-23 21:07:01.811 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:02 np0005532762 nova_compute[230183]: 2025-11-23 21:07:02.012 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:02 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb1180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:02 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:02 np0005532762 nova_compute[230183]: 2025-11-23 21:07:02.918 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:03 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:03 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:04 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114000d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:04 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:04 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:04.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.091 230187 INFO nova.compute.manager [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Get console output#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.096 230187 INFO oslo.privsep.daemon [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpmmlmp56h/privsep.sock']#033[00m
Nov 23 16:07:05 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:05 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.758 230187 INFO oslo.privsep.daemon [None req-4b9dad6f-8c27-41f2-9c7e-8ab6bb5d4d3b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.639 234120 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.642 234120 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.644 234120 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.644 234120 INFO oslo.privsep.daemon [-] privsep daemon running as pid 234120#033[00m
Nov 23 16:07:05 np0005532762 nova_compute[230183]: 2025-11-23 21:07:05.853 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:07:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:06 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:06 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:06 np0005532762 nova_compute[230183]: 2025-11-23 21:07:06.779 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:07 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:07 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:07 np0005532762 nova_compute[230183]: 2025-11-23 21:07:07.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:08.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:08 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:08 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:08 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:09 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:09 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:10 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:10 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:10 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:11.218 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:07:11 np0005532762 nova_compute[230183]: 2025-11-23 21:07:11.219 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:11.220 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:07:11 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:11 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:11 np0005532762 nova_compute[230183]: 2025-11-23 21:07:11.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:12 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114001840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:12 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:12 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:12.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:12 np0005532762 nova_compute[230183]: 2025-11-23 21:07:12.962 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:13 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:13 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:14 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:14 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:14 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:14 np0005532762 podman[234127]: 2025-11-23 21:07:14.65636057 +0000 UTC m=+0.061196937 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 23 16:07:14 np0005532762 podman[234126]: 2025-11-23 21:07:14.693772198 +0000 UTC m=+0.098378898 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 16:07:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:15 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:15 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:16 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:16 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:16 np0005532762 nova_compute[230183]: 2025-11-23 21:07:16.782 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:17 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:17 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:17 np0005532762 nova_compute[230183]: 2025-11-23 21:07:17.965 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:18 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:18.222 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:07:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:18 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:18 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:18 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:18.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:19 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:19 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114002cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:19 np0005532762 podman[234198]: 2025-11-23 21:07:19.504889755 +0000 UTC m=+0.069495468 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 16:07:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:20 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:20 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:20 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:07:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:07:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:20.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:21 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:21 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:21 np0005532762 nova_compute[230183]: 2025-11-23 21:07:21.785 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:22 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:22 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:22 np0005532762 nova_compute[230183]: 2025-11-23 21:07:22.968 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:23 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:23 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:24 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb114003dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:24 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:24 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:24 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:24 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:07:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:25 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:25 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:26 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:26 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:26 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:26 np0005532762 nova_compute[230183]: 2025-11-23 21:07:26.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:27 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:27 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:27 np0005532762 nova_compute[230183]: 2025-11-23 21:07:27.988 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:28 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:28 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:29 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:29 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:30 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:30 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:31 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:31 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:31 np0005532762 nova_compute[230183]: 2025-11-23 21:07:31.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:32 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:32.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:33 np0005532762 nova_compute[230183]: 2025-11-23 21:07:33.035 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:33 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:33 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:34 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:34 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:34 np0005532762 nova_compute[230183]: 2025-11-23 21:07:34.605 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:35 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:35 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:36 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:36 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:36.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:36 np0005532762 nova_compute[230183]: 2025-11-23 21:07:36.792 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:37 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:37 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:38 np0005532762 nova_compute[230183]: 2025-11-23 21:07:38.080 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:38 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:38 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.659924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058659962, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 897, "num_deletes": 251, "total_data_size": 1859380, "memory_usage": 1888000, "flush_reason": "Manual Compaction"}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058672151, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1227461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25232, "largest_seqno": 26124, "table_properties": {"data_size": 1223306, "index_size": 1871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9656, "raw_average_key_size": 19, "raw_value_size": 1214752, "raw_average_value_size": 2504, "num_data_blocks": 83, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763931996, "oldest_key_time": 1763931996, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12315 microseconds, and 3973 cpu microseconds.
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.672222) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1227461 bytes OK
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.672266) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674320) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674336) EVENT_LOG_v1 {"time_micros": 1763932058674332, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1854806, prev total WAL file size 1854806, number of live WAL files 2.
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.675047) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1198KB)], [48(13MB)]
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058675113, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 15195643, "oldest_snapshot_seqno": -1}
Nov 23 16:07:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5389 keys, 13042783 bytes, temperature: kUnknown
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058767380, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 13042783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13007149, "index_size": 21060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138501, "raw_average_key_size": 25, "raw_value_size": 12909950, "raw_average_value_size": 2395, "num_data_blocks": 855, "num_entries": 5389, "num_filter_entries": 5389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932058, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.767597) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13042783 bytes
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.776961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.6 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 5909, records dropped: 520 output_compression: NoCompression
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.776995) EVENT_LOG_v1 {"time_micros": 1763932058776979, "job": 28, "event": "compaction_finished", "compaction_time_micros": 92336, "compaction_time_cpu_micros": 48916, "output_level": 6, "num_output_files": 1, "total_output_size": 13042783, "num_input_records": 5909, "num_output_records": 5389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058777439, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932058779487, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.674926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:38 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:07:38.779627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:07:39 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:39 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120002a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:40 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:40 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003de0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:41 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:41 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:41 np0005532762 nova_compute[230183]: 2025-11-23 21:07:41.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:42 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:42 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:42.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:43 np0005532762 nova_compute[230183]: 2025-11-23 21:07:43.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:43 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:43 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:44.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:44 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:44 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:44.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:45 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:45 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:45 np0005532762 podman[234365]: 2025-11-23 21:07:45.654426975 +0000 UTC m=+0.059459749 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 23 16:07:45 np0005532762 podman[234364]: 2025-11-23 21:07:45.69462676 +0000 UTC m=+0.095588301 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 16:07:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003f70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:46 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:46 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124003fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:46.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:46 np0005532762 nova_compute[230183]: 2025-11-23 21:07:46.797 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:47 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:47 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:48.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:48 np0005532762 nova_compute[230183]: 2025-11-23 21:07:48.146 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:48 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:48 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130002a20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:48.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:49 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:49 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:49 np0005532762 podman[234411]: 2025-11-23 21:07:49.664773542 +0000 UTC m=+0.076437051 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:07:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:07:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:07:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:50 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:50 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.064 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.065 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:51.065 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:51 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:51 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:51 np0005532762 nova_compute[230183]: 2025-11-23 21:07:51.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:52 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:52 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:53 np0005532762 nova_compute[230183]: 2025-11-23 21:07:53.149 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:53 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:53 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003fd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:54 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:54 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb124004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:54.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:54Z|00032|binding|INFO|Releasing lport aeecf50b-036b-450d-8620-c40267ec9fc6 from this chassis (sb_readonly=0)
Nov 23 16:07:54 np0005532762 nova_compute[230183]: 2025-11-23 21:07:54.913 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.436 230187 DEBUG nova.compute.manager [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG nova.compute.manager [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing instance network info cache due to event network-changed-f23315bc-0f2d-4e45-91a2-0f72a8929b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.437 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Refreshing network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.520 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:55 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:55 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.520 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.521 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.521 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.522 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.523 230187 INFO nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Terminating instance#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.524 230187 DEBUG nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:07:55 np0005532762 kernel: tapf23315bc-0f (unregistering): left promiscuous mode
Nov 23 16:07:55 np0005532762 NetworkManager[49021]: <info>  [1763932075.5846] device (tapf23315bc-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.593 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00033|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=0)
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00034|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down in Southbound
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00035|binding|INFO|Removing iface tapf23315bc-0f ovn-installed in OVS
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.597 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.604 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.605 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.607 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.608 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8759fe-b200-4ce7-bdf0-ea5ae9567bf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.609 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 namespace which is not needed anymore#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.613 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 23 16:07:55 np0005532762 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.590s CPU time.
Nov 23 16:07:55 np0005532762 systemd-machined[193469]: Machine qemu-1-instance-00000001 terminated.
Nov 23 16:07:55 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : haproxy version is 2.8.14-c23fe91
Nov 23 16:07:55 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [NOTICE]   (234013) : path to executable is /usr/sbin/haproxy
Nov 23 16:07:55 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [WARNING]  (234013) : Exiting Master process...
Nov 23 16:07:55 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [ALERT]    (234013) : Current worker (234022) exited with code 143 (Terminated)
Nov 23 16:07:55 np0005532762 neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6[233999]: [WARNING]  (234013) : All workers exited. Exiting... (0)
Nov 23 16:07:55 np0005532762 systemd[1]: libpod-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope: Deactivated successfully.
Nov 23 16:07:55 np0005532762 kernel: tapf23315bc-0f: entered promiscuous mode
Nov 23 16:07:55 np0005532762 kernel: tapf23315bc-0f (unregistering): left promiscuous mode
Nov 23 16:07:55 np0005532762 podman[234460]: 2025-11-23 21:07:55.741544572 +0000 UTC m=+0.041453340 container died 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:07:55 np0005532762 NetworkManager[49021]: <info>  [1763932075.7433] manager: (tapf23315bc-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00036|binding|INFO|Claiming lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 for this chassis.
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00037|binding|INFO|f23315bc-0f2d-4e45-91a2-0f72a8929b88: Claiming fa:16:3e:6f:7a:f0 10.100.0.10
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.751 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.753 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.766 230187 INFO nova.virt.libvirt.driver [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Instance destroyed successfully.#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.766 230187 DEBUG nova.objects.instance [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid b88f69cf-a706-408d-8dd0-9c891ac278df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.770 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00038|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 ovn-installed in OVS
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00039|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 up in Southbound
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00040|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=1)
Nov 23 16:07:55 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6-userdata-shm.mount: Deactivated successfully.
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00041|if_status|INFO|Not setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down as sb is readonly
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00042|binding|INFO|Removing iface tapf23315bc-0f ovn-installed in OVS
Nov 23 16:07:55 np0005532762 systemd[1]: var-lib-containers-storage-overlay-b4c5263137e4ec5459bb96e030a6c0c4608606faf4bf3953f654301e7654b612-merged.mount: Deactivated successfully.
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00043|binding|INFO|Releasing lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 from this chassis (sb_readonly=0)
Nov 23 16:07:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:07:55Z|00044|binding|INFO|Setting lport f23315bc-0f2d-4e45-91a2-0f72a8929b88 down in Southbound
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.785 230187 DEBUG nova.virt.libvirt.vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:06:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2101370279',display_name='tempest-TestNetworkBasicOps-server-2101370279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2101370279',id=1,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkopXsVozaBPjiL+h6NejRz4cW0k9/uA5JpHUVBsNmasGNuNCs7C0SGQ6LkonC2lifS0mLNUtTMnfgtFGQBRj5+CsXOBseSmB+++OQ3W87ZPdTUTnkg9uBrGbnjrus9+A==',key_name='tempest-TestNetworkBasicOps-1595128200',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:06:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-97azc21p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:06:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=b88f69cf-a706-408d-8dd0-9c891ac278df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.785 230187 DEBUG nova.network.os_vif_util [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.786 230187 DEBUG nova.network.os_vif_util [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.787 230187 DEBUG os_vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.788 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:7a:f0 10.100.0.10'], port_security=['fa:16:3e:6f:7a:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b88f69cf-a706-408d-8dd0-9c891ac278df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2fd5313-3792-44d3-ba44-78e423066c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33862b60-f5fc-47c1-8327-a9c7a8a97ff8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=f23315bc-0f2d-4e45-91a2-0f72a8929b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.790 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23315bc-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.792 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:07:55 np0005532762 podman[234460]: 2025-11-23 21:07:55.792017541 +0000 UTC m=+0.091926309 container cleanup 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.794 230187 INFO os_vif [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:7a:f0,bridge_name='br-int',has_traffic_filtering=True,id=f23315bc-0f2d-4e45-91a2-0f72a8929b88,network=Network(7aadcd86-30a0-48ed-988a-324cae3af3e6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf23315bc-0f')#033[00m
Nov 23 16:07:55 np0005532762 systemd[1]: libpod-conmon-788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6.scope: Deactivated successfully.
Nov 23 16:07:55 np0005532762 podman[234500]: 2025-11-23 21:07:55.873815349 +0000 UTC m=+0.057067634 container remove 788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.883 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[82b4039e-b4b2-406c-ab35-80d55b062718]: (4, ('Sun Nov 23 09:07:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 (788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6)\n788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6\nSun Nov 23 09:07:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 (788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6)\n788cac40ba3f928f4c7a08498ffacecf01fe20fcd3dacb3d0a6c5ef868aea5d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.885 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[af728f66-5943-4a37-a338-01e47a8d02da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.886 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aadcd86-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:07:55 np0005532762 kernel: tap7aadcd86-30: left promiscuous mode
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.892 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.894 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7cbdd0-d31d-4c11-8788-1bf7f84b10c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 nova_compute[230183]: 2025-11-23 21:07:55.902 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.908 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a4694ce6-7d56-4a8d-9fe1-7dd272908c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.910 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5e12edf8-4c66-4cff-9263-9eb665d3f5e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.924 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccb1537-af54-4aa0-84b5-c5c37209bb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395714, 'reachable_time': 29894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234526, 'error': None, 'target': 'ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.936 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aadcd86-30a0-48ed-988a-324cae3af3e6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:07:55 np0005532762 systemd[1]: run-netns-ovnmeta\x2d7aadcd86\x2d30a0\x2d48ed\x2d988a\x2d324cae3af3e6.mount: Deactivated successfully.
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.936 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[6e42b2c8-bdc4-4805-9165-07af1b2666ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.938 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.939 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.940 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ab74d-0e01-4715-b901-4759666a7dba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.940 142158 INFO neutron.agent.ovn.metadata.agent [-] Port f23315bc-0f2d-4e45-91a2-0f72a8929b88 in datapath 7aadcd86-30a0-48ed-988a-324cae3af3e6 unbound from our chassis#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.941 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aadcd86-30a0-48ed-988a-324cae3af3e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:07:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:07:55.942 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ad4dfe-5bfd-42e4-b082-c47b36f19123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:07:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.196 230187 INFO nova.virt.libvirt.driver [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deleting instance files /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df_del#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.197 230187 INFO nova.virt.libvirt.driver [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deletion of /var/lib/nova/instances/b88f69cf-a706-408d-8dd0-9c891ac278df_del complete#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.256 230187 DEBUG nova.virt.libvirt.host [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.257 230187 INFO nova.virt.libvirt.host [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] UEFI support detected#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.258 230187 INFO nova.compute.manager [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.259 230187 DEBUG oslo.service.loopingcall [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.259 230187 DEBUG nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.260 230187 DEBUG nova.network.neutron [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:07:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:56 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:56 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.623 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updated VIF entry in instance network info cache for port f23315bc-0f2d-4e45-91a2-0f72a8929b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.624 230187 DEBUG nova.network.neutron [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [{"id": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "address": "fa:16:3e:6f:7a:f0", "network": {"id": "7aadcd86-30a0-48ed-988a-324cae3af3e6", "bridge": "br-int", "label": "tempest-network-smoke--57523881", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf23315bc-0f", "ovs_interfaceid": "f23315bc-0f2d-4e45-91a2-0f72a8929b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.641 230187 DEBUG oslo_concurrency.lockutils [req-d3a780de-ddd5-46c8-9ead-3ee9ea347c12 req-b57ce57d-daef-4102-89c7-6bac34bd8d09 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-b88f69cf-a706-408d-8dd0-9c891ac278df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.759 230187 DEBUG nova.network.neutron [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:07:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:56.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.835 230187 INFO nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Took 0.58 seconds to deallocate network for instance.#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.876 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.877 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:56 np0005532762 nova_compute[230183]: 2025-11-23 21:07:56.929 230187 DEBUG oslo_concurrency.processutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131247952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.384 230187 DEBUG oslo_concurrency.processutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.394 230187 DEBUG nova.compute.provider_tree [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.410 230187 DEBUG nova.scheduler.client.report [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.434 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.477 230187 INFO nova.scheduler.client.report [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance b88f69cf-a706-408d-8dd0-9c891ac278df#033[00m
Nov 23 16:07:57 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:57 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118003ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.556 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.557 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.558 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.558 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.559 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.559 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.560 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.560 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.561 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.561 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.562 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.563 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.563 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.564 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.564 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.565 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.565 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.566 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.566 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.567 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.567 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.568 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.568 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.569 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.569 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.570 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.570 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.571 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.571 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.572 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-unplugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.572 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.573 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.573 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.574 230187 DEBUG oslo_concurrency.lockutils [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.574 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] No waiting events found dispatching network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.575 230187 WARNING nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received unexpected event network-vif-plugged-f23315bc-0f2d-4e45-91a2-0f72a8929b88 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.576 230187 DEBUG nova.compute.manager [req-6af55988-5743-4a94-958f-d76d539ee154 req-42983f60-4f5f-4d63-84aa-9adb5795a965 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Received event network-vif-deleted-f23315bc-0f2d-4e45-91a2-0f72a8929b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:07:57 np0005532762 nova_compute[230183]: 2025-11-23 21:07:57.581 230187 DEBUG oslo_concurrency.lockutils [None req-8a066405-df10-4c77-ab9f-1c006077ccf2 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "b88f69cf-a706-408d-8dd0-9c891ac278df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:07:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:58 np0005532762 nova_compute[230183]: 2025-11-23 21:07:58.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:58 np0005532762 nova_compute[230183]: 2025-11-23 21:07:58.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:07:58 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:58 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb120003b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:07:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:07:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:07:58.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.446 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.447 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.473 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.474 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:07:59 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:07:59 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb13c002e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:07:59 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:07:59 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2288679670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:07:59 np0005532762 nova_compute[230183]: 2025-11-23 21:07:59.943 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.098 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4933MB free_disk=59.94269561767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.196 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:08:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb118004010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 23 16:08:00 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha[233265]: 23/11/2025 21:08:00 : epoch 69237748 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb130003b20 fd 38 proxy ignored for local
Nov 23 16:08:00 np0005532762 kernel: ganesha.nfsd[234330]: segfault at 50 ip 00007fb1ed0b932e sp 00007fb1ae7fb210 error 4 in libntirpc.so.5.8[7fb1ed09e000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 23 16:08:00 np0005532762 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 23 16:08:00 np0005532762 systemd[1]: Started Process Core Dump (PID 234598/UID 0).
Nov 23 16:08:00 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:08:00 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2263582620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.644 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.649 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.675 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.703 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.703 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:00.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:00 np0005532762 nova_compute[230183]: 2025-11-23 21:08:00.805 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:01 np0005532762 systemd-coredump[234599]: Process 233269 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 57:#012#0  0x00007fb1ed0b932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.645 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:01 np0005532762 systemd[1]: systemd-coredump@12-234598-0.service: Deactivated successfully.
Nov 23 16:08:01 np0005532762 systemd[1]: systemd-coredump@12-234598-0.service: Consumed 1.100s CPU time.
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.684 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.696 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.696 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:01 np0005532762 podman[234608]: 2025-11-23 21:08:01.74067205 +0000 UTC m=+0.042361055 container died d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 23 16:08:01 np0005532762 systemd[1]: var-lib-containers-storage-overlay-24653c180c9318b519976f965307614ae6e36c0f21676083060c7a6287ff60f0-merged.mount: Deactivated successfully.
Nov 23 16:08:01 np0005532762 podman[234608]: 2025-11-23 21:08:01.777020548 +0000 UTC m=+0.078709563 container remove d5b74120fbf861ec21b580a080981227bcd9c52288af0a95ae65bbf92f739f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-nfs-cephfs-0-0-compute-1-fuxuha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:08:01 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Main process exited, code=exited, status=139/n/a
Nov 23 16:08:01 np0005532762 nova_compute[230183]: 2025-11-23 21:08:01.831 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:01 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:08:01 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.483s CPU time.
Nov 23 16:08:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:02 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210802 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:08:02 np0005532762 nova_compute[230183]: 2025-11-23 21:08:02.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:04.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:05 np0005532762 nova_compute[230183]: 2025-11-23 21:08:05.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:06 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/210806 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:08:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:06 np0005532762 nova_compute[230183]: 2025-11-23 21:08:06.864 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:08.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:08.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:10.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:10 np0005532762 nova_compute[230183]: 2025-11-23 21:08:10.763 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932075.7610657, b88f69cf-a706-408d-8dd0-9c891ac278df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:08:10 np0005532762 nova_compute[230183]: 2025-11-23 21:08:10.763 230187 INFO nova.compute.manager [-] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:08:10 np0005532762 nova_compute[230183]: 2025-11-23 21:08:10.779 230187 DEBUG nova.compute.manager [None req-a87ce78c-0b2c-4bd5-ae88-8c1b0b7ab7f8 - - - - - -] [instance: b88f69cf-a706-408d-8dd0-9c891ac278df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:08:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:10.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:10 np0005532762 nova_compute[230183]: 2025-11-23 21:08:10.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:11.350 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:08:11 np0005532762 nova_compute[230183]: 2025-11-23 21:08:11.350 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:11.351 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:08:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:11 np0005532762 nova_compute[230183]: 2025-11-23 21:08:11.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Scheduled restart job, restart counter is at 13.
Nov 23 16:08:12 np0005532762 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:08:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Consumed 1.483s CPU time.
Nov 23 16:08:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Start request repeated too quickly.
Nov 23 16:08:12 np0005532762 systemd[1]: ceph-03808be8-ae4a-5548-82e6-4a294f1bc627@nfs.cephfs.0.0.compute-1.fuxuha.service: Failed with result 'exit-code'.
Nov 23 16:08:12 np0005532762 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.fuxuha for 03808be8-ae4a-5548-82e6-4a294f1bc627.
Nov 23 16:08:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:12.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000055s ======
Nov 23 16:08:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 23 16:08:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:14 np0005532762 nova_compute[230183]: 2025-11-23 21:08:14.906 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:14 np0005532762 nova_compute[230183]: 2025-11-23 21:08:14.906 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:14 np0005532762 nova_compute[230183]: 2025-11-23 21:08:14.925 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.004 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.004 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.011 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.011 230187 INFO nova.compute.claims [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.095 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.524 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.531 230187 DEBUG nova.compute.provider_tree [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.553 230187 DEBUG nova.scheduler.client.report [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.580 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.581 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.663 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.663 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.714 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.734 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.819 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.820 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.821 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating image(s)#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.850 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.877 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.906 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.910 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.963 230187 DEBUG nova.policy [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.991 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.991 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.992 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:15 np0005532762 nova_compute[230183]: 2025-11-23 21:08:15.992 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.016 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.019 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.279 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.339 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.444 230187 DEBUG nova.objects.instance [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.460 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.461 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Ensure instance console log exists: /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.461 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.462 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.462 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:16 np0005532762 podman[234872]: 2025-11-23 21:08:16.660935032 +0000 UTC m=+0.065664862 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 16:08:16 np0005532762 podman[234871]: 2025-11-23 21:08:16.720914684 +0000 UTC m=+0.126659302 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:08:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:16 np0005532762 nova_compute[230183]: 2025-11-23 21:08:16.910 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:17 np0005532762 nova_compute[230183]: 2025-11-23 21:08:17.086 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully created port: 932faebb-b274-4e17-94a9-9339a27c275f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.183 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully updated port: 932faebb-b274-4e17-94a9-9339a27c275f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:08:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.195 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.307 230187 DEBUG nova.compute.manager [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.307 230187 DEBUG nova.compute.manager [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.308 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:18 np0005532762 nova_compute[230183]: 2025-11-23 21:08:18.379 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:08:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.380 230187 DEBUG nova.network.neutron [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance network_info: |[{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.395 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.396 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.398 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start _get_guest_xml network_info=[{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.401 230187 WARNING nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.405 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.405 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.410 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.libvirt.host [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.411 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.412 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.413 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.414 230187 DEBUG nova.virt.hardware [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.416 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:08:19 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3318702537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.867 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.895 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:19 np0005532762 nova_compute[230183]: 2025-11-23 21:08:19.901 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:20 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:08:20 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1702176561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.350 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.352 230187 DEBUG nova.virt.libvirt.vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:08:15Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.352 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.353 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:20 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:20.353 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.354 230187 DEBUG nova.objects.instance [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.370 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <name>instance-00000003</name>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:08:19</nova:creationTime>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="serial">451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="uuid">451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:22:80:b0"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <target dev="tap932faebb-b2"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log" append="off"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:08:20 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:08:20 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:08:20 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:08:20 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.371 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Preparing to wait for external event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.371 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.372 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.372 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.virt.libvirt.vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:08:15Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.373 230187 DEBUG nova.network.os_vif_util [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.374 230187 DEBUG os_vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.374 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.375 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.375 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.377 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.378 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap932faebb-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.378 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap932faebb-b2, col_values=(('external_ids', {'iface-id': '932faebb-b274-4e17-94a9-9339a27c275f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:80:b0', 'vm-uuid': '451aa9f7-4cd0-413e-beed-8a30a8685ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.379 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:20 np0005532762 NetworkManager[49021]: <info>  [1763932100.3810] manager: (tap932faebb-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.384 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.388 230187 INFO os_vif [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2')#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.426 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.427 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.427 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:22:80:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.428 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Using config drive#033[00m
Nov 23 16:08:20 np0005532762 nova_compute[230183]: 2025-11-23 21:08:20.452 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:20 np0005532762 podman[235000]: 2025-11-23 21:08:20.656026704 +0000 UTC m=+0.075324860 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:08:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:20.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.113 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Creating config drive at /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.123 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou5qgsff execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.250 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpou5qgsff" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.281 230187 DEBUG nova.storage.rbd_utils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.285 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.322 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.323 230187 DEBUG nova.network.neutron [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.338 230187 DEBUG oslo_concurrency.lockutils [req-3a227a43-659c-4147-80cc-c43e34a28cae req-3bcd89bf-e31a-4ee8-96b0-e64ba34047c9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.452 230187 DEBUG oslo_concurrency.processutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config 451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.453 230187 INFO nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deleting local config drive /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/disk.config because it was imported into RBD.#033[00m
Nov 23 16:08:21 np0005532762 kernel: tap932faebb-b2: entered promiscuous mode
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.5037] manager: (tap932faebb-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 23 16:08:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:21Z|00045|binding|INFO|Claiming lport 932faebb-b274-4e17-94a9-9339a27c275f for this chassis.
Nov 23 16:08:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:21Z|00046|binding|INFO|932faebb-b274-4e17-94a9-9339a27c275f: Claiming fa:16:3e:22:80:b0 10.100.0.5
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.519 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:80:b0 10.100.0.5'], port_security=['fa:16:3e:22:80:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfca448-ff51-45d5-9a96-e7d306414608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b3669a8c-2edc-4975-aec5-618de39b846f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab9ca556-3834-43fe-9280-f86716cb1ac8, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=932faebb-b274-4e17-94a9-9339a27c275f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.520 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 932faebb-b274-4e17-94a9-9339a27c275f in datapath 0cfca448-ff51-45d5-9a96-e7d306414608 bound to our chassis#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.521 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cfca448-ff51-45d5-9a96-e7d306414608#033[00m
Nov 23 16:08:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.531 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1aada236-2f5c-4072-a960-4ba4fc5c95bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.532 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cfca448-f1 in ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:08:21 np0005532762 systemd-machined[193469]: New machine qemu-2-instance-00000003.
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.533 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cfca448-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.533 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f753328a-1356-4b8d-be59-abcbf8a31015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.534 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf145dad-e551-4a8a-bf0e-309dd0b0e11c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.544 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1326c1c8-a04f-4a52-aa3f-34aee9eca868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.568 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e7044e-3925-4910-851a-1932692313b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 systemd-udevd[235076]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.5919] device (tap932faebb-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.5931] device (tap932faebb-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.601 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b88f7ce-2346-4291-8f7c-6ec7034382f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.6080] manager: (tap0cfca448-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.606 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaa6f28-2cdc-4d95-99b9-2aff9763fbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:21Z|00047|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f ovn-installed in OVS
Nov 23 16:08:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:21Z|00048|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f up in Southbound
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.614 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.638 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[772865ed-72e2-46a2-b30e-471b2ad1f263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.641 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c24cd3ef-c57d-476b-9e55-a67705b0b485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.6577] device (tap0cfca448-f0): carrier: link connected
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.662 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a05331e-5f02-4185-831d-ff3038b5024b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.680 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee19be-d932-4cbe-a96c-538dde2fa697]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfca448-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:a5:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405023, 'reachable_time': 41170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235106, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.693 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b71050-b303-4443-84ad-07985cdb37ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:a57d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405023, 'tstamp': 405023}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235107, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.709 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[48ed6155-cf74-4b17-a0ab-fd5196ce9d70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfca448-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:a5:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405023, 'reachable_time': 41170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235108, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.741 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5f08d759-07db-4404-a7c7-b6dcdd9bf68f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.808 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c7abd8d6-d662-41b8-9636-4b78a8bcd865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.809 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfca448-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.809 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.810 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cfca448-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.811 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:21 np0005532762 kernel: tap0cfca448-f0: entered promiscuous mode
Nov 23 16:08:21 np0005532762 NetworkManager[49021]: <info>  [1763932101.8131] manager: (tap0cfca448-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.816 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cfca448-f0, col_values=(('external_ids', {'iface-id': '54600d4f-e167-4eaf-830f-ddc1c402909e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:21Z|00049|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.818 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.819 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.819 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb51c47-c49a-47f8-935a-e47623bdd58b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.820 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-0cfca448-ff51-45d5-9a96-e7d306414608
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/0cfca448-ff51-45d5-9a96-e7d306414608.pid.haproxy
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 0cfca448-ff51-45d5-9a96-e7d306414608
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:08:21 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:21.821 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'env', 'PROCESS_TAG=haproxy-0cfca448-ff51-45d5-9a96-e7d306414608', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cfca448-ff51-45d5-9a96-e7d306414608.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.830 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:21 np0005532762 nova_compute[230183]: 2025-11-23 21:08:21.912 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.179 230187 DEBUG nova.compute.manager [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG oslo_concurrency.lockutils [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.180 230187 DEBUG nova.compute.manager [req-dfe63e0f-b9c9-4d03-8851-1d9f4773f2d5 req-cf274ec6-893c-49dc-8482-2ab7f19b2e86 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Processing event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:08:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:22.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:22 np0005532762 podman[235140]: 2025-11-23 21:08:22.225017778 +0000 UTC m=+0.068466418 container create 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 16:08:22 np0005532762 systemd[1]: Started libpod-conmon-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope.
Nov 23 16:08:22 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:08:22 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e59bb26b82ad07b4bc95bd3eabbfae128162a27036a9012db8ac3aeadc048e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:22 np0005532762 podman[235140]: 2025-11-23 21:08:22.195458119 +0000 UTC m=+0.038906799 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:08:22 np0005532762 podman[235140]: 2025-11-23 21:08:22.293110807 +0000 UTC m=+0.136559437 container init 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:08:22 np0005532762 podman[235140]: 2025-11-23 21:08:22.297965012 +0000 UTC m=+0.141413642 container start 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:08:22 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : New worker (235202) forked
Nov 23 16:08:22 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : Loading success.
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.347 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.3467357, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.347 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Started (Lifecycle Event)#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.349 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.352 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.355 230187 INFO nova.virt.libvirt.driver [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance spawned successfully.#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.356 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.369 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.375 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.380 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.380 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.381 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.381 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.382 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.382 230187 DEBUG nova.virt.libvirt.driver [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.389 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.389 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.346841, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.390 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.409 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.413 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932102.3515668, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.413 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.443 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.445 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.478 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.498 230187 INFO nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 6.68 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.499 230187 DEBUG nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.568 230187 INFO nova.compute.manager [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 7.59 seconds to build instance.#033[00m
Nov 23 16:08:22 np0005532762 nova_compute[230183]: 2025-11-23 21:08:22.587 230187 DEBUG oslo_concurrency.lockutils [None req-8271d9c0-edad-4ef8-966e-d27b6e0dc1ed 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:22.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:24.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.244 230187 DEBUG nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.245 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.245 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 DEBUG oslo_concurrency.lockutils [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 DEBUG nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:08:24 np0005532762 nova_compute[230183]: 2025-11-23 21:08:24.246 230187 WARNING nova.compute.manager [req-dd4752ca-a477-48e6-ab04-868e85bc27f2 req-b91def5d-6189-407f-b1fe-57d5cfc2c079 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with vm_state active and task_state None.#033[00m
Nov 23 16:08:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:24.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:25 np0005532762 nova_compute[230183]: 2025-11-23 21:08:25.381 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:25 np0005532762 NetworkManager[49021]: <info>  [1763932105.6349] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 23 16:08:25 np0005532762 NetworkManager[49021]: <info>  [1763932105.6361] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 23 16:08:25 np0005532762 nova_compute[230183]: 2025-11-23 21:08:25.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:25 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:25Z|00050|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 16:08:25 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:25Z|00051|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 16:08:25 np0005532762 nova_compute[230183]: 2025-11-23 21:08:25.684 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:25 np0005532762 nova_compute[230183]: 2025-11-23 21:08:25.688 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.232 230187 DEBUG nova.compute.manager [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG nova.compute.manager [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.233 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:08:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:08:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:08:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:26.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:26 np0005532762 nova_compute[230183]: 2025-11-23 21:08:26.938 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:28.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:29 np0005532762 nova_compute[230183]: 2025-11-23 21:08:29.542 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:08:29 np0005532762 nova_compute[230183]: 2025-11-23 21:08:29.543 230187 DEBUG nova.network.neutron [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:29 np0005532762 nova_compute[230183]: 2025-11-23 21:08:29.565 230187 DEBUG oslo_concurrency.lockutils [req-150712fc-1986-49b8-a9cd-6ffa4a9b7be7 req-19bdc11d-cd0e-42de-be75-054d2493d33a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:30.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:30 np0005532762 nova_compute[230183]: 2025-11-23 21:08:30.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:08:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:31 np0005532762 nova_compute[230183]: 2025-11-23 21:08:31.915 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:32.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:34.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:35 np0005532762 nova_compute[230183]: 2025-11-23 21:08:35.391 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:80:b0 10.100.0.5
Nov 23 16:08:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:80:b0 10.100.0.5
Nov 23 16:08:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:36.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:36 np0005532762 nova_compute[230183]: 2025-11-23 21:08:36.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:38.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:40 np0005532762 nova_compute[230183]: 2025-11-23 21:08:40.439 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:41 np0005532762 nova_compute[230183]: 2025-11-23 21:08:41.965 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:42.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:42 np0005532762 nova_compute[230183]: 2025-11-23 21:08:42.798 230187 INFO nova.compute.manager [None req-f6e381c6-246a-4963-ac73-71e7bb9aa240 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Get console output#033[00m
Nov 23 16:08:42 np0005532762 nova_compute[230183]: 2025-11-23 21:08:42.803 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:08:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:45 np0005532762 nova_compute[230183]: 2025-11-23 21:08:45.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:46.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:46 np0005532762 nova_compute[230183]: 2025-11-23 21:08:46.967 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:47 np0005532762 podman[235383]: 2025-11-23 21:08:47.64484625 +0000 UTC m=+0.052706683 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 23 16:08:47 np0005532762 podman[235382]: 2025-11-23 21:08:47.679679974 +0000 UTC m=+0.089824211 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:08:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:48.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:48.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:49 np0005532762 nova_compute[230183]: 2025-11-23 21:08:49.116 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:49 np0005532762 nova_compute[230183]: 2025-11-23 21:08:49.117 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:49 np0005532762 nova_compute[230183]: 2025-11-23 21:08:49.117 230187 DEBUG nova.objects.instance [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:50 np0005532762 nova_compute[230183]: 2025-11-23 21:08:50.011 230187 DEBUG nova.objects.instance [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:50 np0005532762 nova_compute[230183]: 2025-11-23 21:08:50.025 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:08:50 np0005532762 nova_compute[230183]: 2025-11-23 21:08:50.211 230187 DEBUG nova.policy [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:08:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:50 np0005532762 nova_compute[230183]: 2025-11-23 21:08:50.515 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:50.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.046 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully created port: c1f5466b-7cb0-4db1-aacf-c88bf808a51a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:08:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:51 np0005532762 podman[235428]: 2025-11-23 21:08:51.656655395 +0000 UTC m=+0.060580360 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.959 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Successfully updated port: c1f5466b-7cb0-4db1-aacf-c88bf808a51a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.971 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.982 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.983 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:51 np0005532762 nova_compute[230183]: 2025-11-23 21:08:51.983 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:08:52 np0005532762 nova_compute[230183]: 2025-11-23 21:08:52.076 230187 DEBUG nova.compute.manager [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:52 np0005532762 nova_compute[230183]: 2025-11-23 21:08:52.076 230187 DEBUG nova.compute.manager [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-c1f5466b-7cb0-4db1-aacf-c88bf808a51a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:08:52 np0005532762 nova_compute[230183]: 2025-11-23 21:08:52.077 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.043 230187 DEBUG nova.network.neutron [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.063 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.065 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.065 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port c1f5466b-7cb0-4db1-aacf-c88bf808a51a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.069 230187 DEBUG nova.virt.libvirt.vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.070 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.071 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.072 230187 DEBUG os_vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.073 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.073 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.074 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.079 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.079 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f5466b-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.080 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1f5466b-7c, col_values=(('external_ids', {'iface-id': 'c1f5466b-7cb0-4db1-aacf-c88bf808a51a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:5e:db', 'vm-uuid': '451aa9f7-4cd0-413e-beed-8a30a8685ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.081 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.0821] manager: (tapc1f5466b-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.088 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.090 230187 INFO os_vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.091 230187 DEBUG nova.virt.libvirt.vif [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.091 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.092 230187 DEBUG nova.network.os_vif_util [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.095 230187 DEBUG nova.virt.libvirt.guest [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] attach device xml: <interface type="ethernet">
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <target dev="tapc1f5466b-7c"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:08:54 np0005532762 nova_compute[230183]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 23 16:08:54 np0005532762 kernel: tapc1f5466b-7c: entered promiscuous mode
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.1104] manager: (tapc1f5466b-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.111 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:54Z|00052|binding|INFO|Claiming lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a for this chassis.
Nov 23 16:08:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:54Z|00053|binding|INFO|c1f5466b-7cb0-4db1-aacf-c88bf808a51a: Claiming fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.127 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:5e:db 10.100.0.25'], port_security=['fa:16:3e:c6:5e:db 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a70406db-79d7-4319-98a3-b89293d6f5cb, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=c1f5466b-7cb0-4db1-aacf-c88bf808a51a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.129 142158 INFO neutron.agent.ovn.metadata.agent [-] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a in datapath c71c794f-3bb9-41ea-bd53-fb4d0511d891 bound to our chassis#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.130 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c71c794f-3bb9-41ea-bd53-fb4d0511d891#033[00m
Nov 23 16:08:54 np0005532762 systemd-udevd[235459]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.140 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ef10b6-3a50-4fac-bc23-802432c274b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.141 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc71c794f-31 in ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.142 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc71c794f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.143 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8a5881-9428-4b7d-bb6a-a978f70c70d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.143 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f9aa7-8518-4ba6-a647-febbf4c29161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.1547] device (tapc1f5466b-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.155 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea66074-dafa-4b9a-af36-392134f8a2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.159 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.1601] device (tapc1f5466b-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:08:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:54Z|00054|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a ovn-installed in OVS
Nov 23 16:08:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:54Z|00055|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a up in Southbound
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.164 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.169 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f5ba08-68ed-4471-acac-472a5299af3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.195 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[afad81f5-28ec-4c7b-9ae8-d0ef637266ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.2028] manager: (tapc71c794f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.203 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d204e0ab-e54f-41b9-8103-c0eaab5ea1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.206 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.206 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.207 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:22:80:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.207 230187 DEBUG nova.virt.libvirt.driver [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:c6:5e:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.229 230187 DEBUG nova.virt.libvirt.guest [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:54 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 16:08:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:54 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:54 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:54 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:54 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:08:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:54.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.233 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d089dfaa-3d1b-4eea-97fe-e79bffdbed92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.238 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d9895fc2-9a95-4d89-bb52-003e8fdeed23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.250 230187 DEBUG oslo_concurrency.lockutils [None req-53189d97-abe4-4d87-8752-b2ef93a334aa 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.2596] device (tapc71c794f-30): carrier: link connected
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.264 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d9da9f-a416-444c-ac87-8e7d46007bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.280 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fde95b44-8629-4b2f-9e6f-b60371572a19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71c794f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:c2:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408283, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235485, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.295 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9adee06c-b191-4721-a6cd-c163f3159624]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:c2a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408283, 'tstamp': 408283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235486, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.310 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4b270c3e-2cf7-4bf6-a9fc-5876587f9e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc71c794f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:c2:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408283, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235487, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.339 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8c558db2-7eae-447f-a332-93dc7d4fe1e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.394 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc10d943-52ce-4751-bd9b-55f6d348e48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.395 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71c794f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.396 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.396 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc71c794f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.396 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 DEBUG oslo_concurrency.lockutils [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 DEBUG nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.397 230187 WARNING nova.compute.manager [req-c75b1380-da47-415c-a736-26a94d8ce267 req-5d88ab42-966f-4fb1-a9cf-cbce3d979ed0 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.398 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 NetworkManager[49021]: <info>  [1763932134.3990] manager: (tapc71c794f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 23 16:08:54 np0005532762 kernel: tapc71c794f-30: entered promiscuous mode
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.403 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc71c794f-30, col_values=(('external_ids', {'iface-id': '5df25d22-b106-405a-b6b2-c3bf4fd41e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:54Z|00056|binding|INFO|Releasing lport 5df25d22-b106-405a-b6b2-c3bf4fd41e45 from this chassis (sb_readonly=0)
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.404 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.405 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.407 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.408 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[57c90d2f-db35-4b63-b0e2-91e70754a46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.408 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-c71c794f-3bb9-41ea-bd53-fb4d0511d891
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/c71c794f-3bb9-41ea-bd53-fb4d0511d891.pid.haproxy
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID c71c794f-3bb9-41ea-bd53-fb4d0511d891
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:08:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:54.409 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'env', 'PROCESS_TAG=haproxy-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c71c794f-3bb9-41ea-bd53-fb4d0511d891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:08:54 np0005532762 nova_compute[230183]: 2025-11-23 21:08:54.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:54 np0005532762 podman[235519]: 2025-11-23 21:08:54.781738149 +0000 UTC m=+0.058072741 container create dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:08:54 np0005532762 systemd[1]: Started libpod-conmon-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope.
Nov 23 16:08:54 np0005532762 podman[235519]: 2025-11-23 21:08:54.753467875 +0000 UTC m=+0.029802507 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:08:54 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:08:54 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09efd6aef4de0b9fee32aa5a46b9f13dd619c92f7667aa4d49a98b02a6bce3c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:08:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:54 np0005532762 podman[235519]: 2025-11-23 21:08:54.876024732 +0000 UTC m=+0.152359364 container init dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 16:08:54 np0005532762 podman[235519]: 2025-11-23 21:08:54.883380066 +0000 UTC m=+0.159714668 container start dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:08:54 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : New worker (235540) forked
Nov 23 16:08:54 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : Loading success.
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.246 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port c1f5466b-7cb0-4db1-aacf-c88bf808a51a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.247 230187 DEBUG nova.network.neutron [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.260 230187 DEBUG oslo_concurrency.lockutils [req-85b57734-87bd-48c8-b714-7bc6086cf1e7 req-4ed76ce1-8c6c-4044-8dc1-ffe495992e34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.629 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.629 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.649 230187 DEBUG nova.objects.instance [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.664 230187 DEBUG nova.virt.libvirt.vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.664 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.665 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.669 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:08:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 16:08:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:5e:db 10.100.0.25
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.672 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.674 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Attempting to detach device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.675 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <target dev="tapc1f5466b-7c"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.679 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <name>instance-00000003</name>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:22:80:b0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='tap932faebb-b2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:c6:5e:db'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='tapc1f5466b-7c'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='net1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 INFO nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the persistent domain config.#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.683 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] (1/8): Attempting to detach device tapc1f5466b-7c with device alias net1 from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.684 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:c6:5e:db"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <target dev="tapc1f5466b-7c"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 23 16:08:55 np0005532762 kernel: tapc1f5466b-7c (unregistering): left promiscuous mode
Nov 23 16:08:55 np0005532762 NetworkManager[49021]: <info>  [1763932135.7803] device (tapc1f5466b-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:08:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:55Z|00057|binding|INFO|Releasing lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a from this chassis (sb_readonly=0)
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.786 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:55Z|00058|binding|INFO|Setting lport c1f5466b-7cb0-4db1-aacf-c88bf808a51a down in Southbound
Nov 23 16:08:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:55Z|00059|binding|INFO|Removing iface tapc1f5466b-7c ovn-installed in OVS
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.790 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Received event <DeviceRemovedEvent: 1763932135.7897394, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.792 230187 DEBUG nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Start waiting for the detach event from libvirt for device tapc1f5466b-7c with device alias net1 for instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.792 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:08:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.795 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:5e:db 10.100.0.25'], port_security=['fa:16:3e:c6:5e:db 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a70406db-79d7-4319-98a3-b89293d6f5cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=c1f5466b-7cb0-4db1-aacf-c88bf808a51a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:08:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.796 142158 INFO neutron.agent.ovn.metadata.agent [-] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a in datapath c71c794f-3bb9-41ea-bd53-fb4d0511d891 unbound from our chassis#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.796 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <name>instance-00000003</name>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:54</nova:creationTime>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:port uuid="c1f5466b-7cb0-4db1-aacf-c88bf808a51a">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:08:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.797 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c71c794f-3bb9-41ea-bd53-fb4d0511d891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:22:80:b0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target dev='tap932faebb-b2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.797 230187 INFO nova.virt.libvirt.driver [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tapc1f5466b-7c from instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 from the live domain config.#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.798 230187 DEBUG nova.virt.libvirt.vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.798 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.799 230187 DEBUG nova.network.os_vif_util [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.799 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e9454103-e961-4851-adc4-bef1e4809888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:55.800 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 namespace which is not needed anymore#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.800 230187 DEBUG os_vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.802 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.802 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f5466b-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.804 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.806 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.811 230187 INFO os_vif [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')#033[00m
Nov 23 16:08:55 np0005532762 nova_compute[230183]: 2025-11-23 21:08:55.812 230187 DEBUG nova.virt.libvirt.guest [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:55 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:55 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:55 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : haproxy version is 2.8.14-c23fe91
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [NOTICE]   (235538) : path to executable is /usr/sbin/haproxy
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : Exiting Master process...
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : Exiting Master process...
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [ALERT]    (235538) : Current worker (235540) exited with code 143 (Terminated)
Nov 23 16:08:55 np0005532762 neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891[235534]: [WARNING]  (235538) : All workers exited. Exiting... (0)
Nov 23 16:08:55 np0005532762 systemd[1]: libpod-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope: Deactivated successfully.
Nov 23 16:08:55 np0005532762 podman[235572]: 2025-11-23 21:08:55.962541183 +0000 UTC m=+0.050839230 container died dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:08:55 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507-userdata-shm.mount: Deactivated successfully.
Nov 23 16:08:55 np0005532762 systemd[1]: var-lib-containers-storage-overlay-09efd6aef4de0b9fee32aa5a46b9f13dd619c92f7667aa4d49a98b02a6bce3c0-merged.mount: Deactivated successfully.
Nov 23 16:08:56 np0005532762 podman[235572]: 2025-11-23 21:08:56.001243136 +0000 UTC m=+0.089541183 container cleanup dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 16:08:56 np0005532762 systemd[1]: libpod-conmon-dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507.scope: Deactivated successfully.
Nov 23 16:08:56 np0005532762 podman[235601]: 2025-11-23 21:08:56.062928496 +0000 UTC m=+0.040223156 container remove dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.068 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f805d049-cf88-4036-9c04-b334cf0b713a]: (4, ('Sun Nov 23 09:08:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 (dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507)\ndea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507\nSun Nov 23 09:08:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 (dea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507)\ndea21853cb7b42b70acf76448e66fa754e888f661bc0674eb93d4f2191bca507\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.069 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a31dba2d-7561-433b-a307-d9ce2334e551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.070 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc71c794f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.073 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:56 np0005532762 kernel: tapc71c794f-30: left promiscuous mode
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.086 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.088 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6c1756-377c-4b54-9dc8-56f5a1380d93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.102 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6a3648-ce81-4b46-acfa-7121191e366b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.103 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0ef576-0fc5-42ca-b894-416fbfb72bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.116 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e13bf8a9-2af1-456f-b86e-6675d594aa54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408276, 'reachable_time': 33556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235617, 'error': None, 'target': 'ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 systemd[1]: run-netns-ovnmeta\x2dc71c794f\x2d3bb9\x2d41ea\x2dbd53\x2dfb4d0511d891.mount: Deactivated successfully.
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.119 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c71c794f-3bb9-41ea-bd53-fb4d0511d891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:08:56 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:56.119 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[3e28f93e-1097-4733-ad86-c7d37cfc9a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:08:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:56.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.363 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.364 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.364 230187 DEBUG nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.428 230187 DEBUG nova.compute.manager [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-deleted-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.429 230187 INFO nova.compute.manager [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Neutron deleted interface c1f5466b-7cb0-4db1-aacf-c88bf808a51a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.429 230187 DEBUG nova.network.neutron [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.461 230187 DEBUG nova.objects.instance [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'system_metadata' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.501 230187 DEBUG nova.objects.instance [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'flavor' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.528 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.529 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.530 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.530 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.531 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.531 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.532 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.533 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.533 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.534 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.534 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.535 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-unplugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.535 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.536 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.536 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 DEBUG oslo_concurrency.lockutils [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 DEBUG nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.537 230187 WARNING nova.compute.manager [req-594dbab1-a6cc-498b-99a4-36347f725d29 req-c2badbcc-ae46-4b50-a19c-ef164fe5d5b3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-c1f5466b-7cb0-4db1-aacf-c88bf808a51a for instance with vm_state active and task_state None.#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.540 230187 DEBUG nova.virt.libvirt.vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.541 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.542 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.545 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.551 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface>not found in domain: <domain type='kvm' id='2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <name>instance-00000003</name>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:22:80:b0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='tap932faebb-b2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.553 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.556 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c6:5e:db"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc1f5466b-7c"/></interface> not found in domain: <domain type='kvm' id='2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <name>instance-00000003</name>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <uuid>451aa9f7-4cd0-413e-beed-8a30a8685ff1</uuid>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:55</nova:creationTime>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='serial'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='uuid'>451aa9f7-4cd0-413e-beed-8a30a8685ff1</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk' index='2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/451aa9f7-4cd0-413e-beed-8a30a8685ff1_disk.config' index='1'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:22:80:b0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target dev='tap932faebb-b2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1/console.log' append='off'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c591,c609</label>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c591,c609</imagelabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.556 230187 WARNING nova.virt.libvirt.driver [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Detaching interface fa:16:3e:c6:5e:db failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapc1f5466b-7c' not found.#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.557 230187 DEBUG nova.virt.libvirt.vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.558 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "address": "fa:16:3e:c6:5e:db", "network": {"id": "c71c794f-3bb9-41ea-bd53-fb4d0511d891", "bridge": "br-int", "label": "tempest-network-smoke--1634889975", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1f5466b-7c", "ovs_interfaceid": "c1f5466b-7cb0-4db1-aacf-c88bf808a51a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.558 230187 DEBUG nova.network.os_vif_util [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.559 230187 DEBUG os_vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.563 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.563 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f5466b-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.564 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.566 230187 INFO os_vif [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:5e:db,bridge_name='br-int',has_traffic_filtering=True,id=c1f5466b-7cb0-4db1-aacf-c88bf808a51a,network=Network(c71c794f-3bb9-41ea-bd53-fb4d0511d891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1f5466b-7c')#033[00m
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.567 230187 DEBUG nova.virt.libvirt.guest [req-0f4793ce-30ee-456e-8738-5d5253afa33f req-3ca5f83d-d6d5-46e9-8c01-5772a0f0ff6a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-163368612</nova:name>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:08:56</nova:creationTime>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    <nova:port uuid="932faebb-b274-4e17-94a9-9339a27c275f">
Nov 23 16:08:56 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:08:56 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:08:56 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:08:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:56 np0005532762 nova_compute[230183]: 2025-11-23 21:08:56.973 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.428 230187 INFO nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Port c1f5466b-7cb0-4db1-aacf-c88bf808a51a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.429 230187 DEBUG nova.network.neutron [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.446 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.461 230187 DEBUG oslo_concurrency.lockutils [None req-15d88cfe-d77b-4e79-b7d3-3a1616bf9174 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-451aa9f7-4cd0-413e-beed-8a30a8685ff1-c1f5466b-7cb0-4db1-aacf-c88bf808a51a" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:57 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:57Z|00060|binding|INFO|Releasing lport 54600d4f-e167-4eaf-830f-ddc1c402909e from this chassis (sb_readonly=0)
Nov 23 16:08:57 np0005532762 nova_compute[230183]: 2025-11-23 21:08:57.592 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:08:58.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:08:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:08:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:08:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.289 230187 DEBUG nova.compute.manager [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-changed-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.289 230187 DEBUG nova.compute.manager [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing instance network info cache due to event network-changed-932faebb-b274-4e17-94a9-9339a27c275f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.290 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Refreshing network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.365 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.366 230187 INFO nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Terminating instance#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.367 230187 DEBUG nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:08:59 np0005532762 kernel: tap932faebb-b2 (unregistering): left promiscuous mode
Nov 23 16:08:59 np0005532762 NetworkManager[49021]: <info>  [1763932139.4196] device (tap932faebb-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.427 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:59Z|00061|binding|INFO|Releasing lport 932faebb-b274-4e17-94a9-9339a27c275f from this chassis (sb_readonly=0)
Nov 23 16:08:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:59Z|00062|binding|INFO|Setting lport 932faebb-b274-4e17-94a9-9339a27c275f down in Southbound
Nov 23 16:08:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:08:59Z|00063|binding|INFO|Removing iface tap932faebb-b2 ovn-installed in OVS
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.429 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.436 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:80:b0 10.100.0.5'], port_security=['fa:16:3e:22:80:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '451aa9f7-4cd0-413e-beed-8a30a8685ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfca448-ff51-45d5-9a96-e7d306414608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b3669a8c-2edc-4975-aec5-618de39b846f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab9ca556-3834-43fe-9280-f86716cb1ac8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=932faebb-b274-4e17-94a9-9339a27c275f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.437 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 932faebb-b274-4e17-94a9-9339a27c275f in datapath 0cfca448-ff51-45d5-9a96-e7d306414608 unbound from our chassis#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.438 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cfca448-ff51-45d5-9a96-e7d306414608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.438 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b47a4487-d14b-4baf-ae7d-6cd32624508e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.439 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 namespace which is not needed anymore#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.447 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 23 16:08:59 np0005532762 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.764s CPU time.
Nov 23 16:08:59 np0005532762 systemd-machined[193469]: Machine qemu-2-instance-00000003 terminated.
Nov 23 16:08:59 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : haproxy version is 2.8.14-c23fe91
Nov 23 16:08:59 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [NOTICE]   (235200) : path to executable is /usr/sbin/haproxy
Nov 23 16:08:59 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [WARNING]  (235200) : Exiting Master process...
Nov 23 16:08:59 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [ALERT]    (235200) : Current worker (235202) exited with code 143 (Terminated)
Nov 23 16:08:59 np0005532762 neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608[235192]: [WARNING]  (235200) : All workers exited. Exiting... (0)
Nov 23 16:08:59 np0005532762 systemd[1]: libpod-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope: Deactivated successfully.
Nov 23 16:08:59 np0005532762 podman[235643]: 2025-11-23 21:08:59.560847145 +0000 UTC m=+0.042098148 container died 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:08:59 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa-userdata-shm.mount: Deactivated successfully.
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.594 230187 INFO nova.virt.libvirt.driver [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Instance destroyed successfully.#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.594 230187 DEBUG nova.objects.instance [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 451aa9f7-4cd0-413e-beed-8a30a8685ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:08:59 np0005532762 systemd[1]: var-lib-containers-storage-overlay-8e59bb26b82ad07b4bc95bd3eabbfae128162a27036a9012db8ac3aeadc048e2-merged.mount: Deactivated successfully.
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.605 230187 DEBUG nova.virt.libvirt.vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:08:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-163368612',display_name='tempest-TestNetworkBasicOps-server-163368612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-163368612',id=3,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6ZIj438fQUpVfUUeh9lapkxwknyZNU4rtkhiTUYmBPGvkJZXNdDf4srslhWKNNtoBf1C2D4cd/jBUBjs52xRw75wPIQzFCZ8VrPBNO0yEc0UePTukzbeBIVnoSLQIebA==',key_name='tempest-TestNetworkBasicOps-1883192829',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:08:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-ptm322on',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:08:22Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=451aa9f7-4cd0-413e-beed-8a30a8685ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.606 230187 DEBUG nova.network.os_vif_util [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.606 230187 DEBUG nova.network.os_vif_util [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.607 230187 DEBUG os_vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:08:59 np0005532762 podman[235643]: 2025-11-23 21:08:59.609750211 +0000 UTC m=+0.091001214 container cleanup 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.610 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.610 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932faebb-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:59 np0005532762 systemd[1]: libpod-conmon-4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa.scope: Deactivated successfully.
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.664 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.668 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.669 230187 INFO os_vif [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:80:b0,bridge_name='br-int',has_traffic_filtering=True,id=932faebb-b274-4e17-94a9-9339a27c275f,network=Network(0cfca448-ff51-45d5-9a96-e7d306414608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap932faebb-b2')#033[00m
Nov 23 16:08:59 np0005532762 podman[235683]: 2025-11-23 21:08:59.671163044 +0000 UTC m=+0.041697468 container remove 4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.676 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[247eca96-e605-4bea-a3b6-4eb268dba296]: (4, ('Sun Nov 23 09:08:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 (4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa)\n4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa\nSun Nov 23 09:08:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 (4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa)\n4d6790caaaf3d0762e0973c0e27b136fb698887c845f3709538675eb279e1ffa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.677 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2429a2bf-d1f1-4fe9-bd2d-b3252115b5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.678 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfca448-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:08:59 np0005532762 kernel: tap0cfca448-f0: left promiscuous mode
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.690 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 nova_compute[230183]: 2025-11-23 21:08:59.694 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.697 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a77340d2-1f9a-4867-9195-f292dd777371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.714 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[446b92a5-1371-4b58-8799-8f31f011b8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.715 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9d57ae47-2b5c-4b94-8a9d-7a9e0cd07a34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.737 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1eadfa44-e541-462e-bcb8-3198810983f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405016, 'reachable_time': 33983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235714, 'error': None, 'target': 'ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.739 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cfca448-ff51-45d5-9a96-e7d306414608 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:08:59 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:08:59.739 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdd37ed-a185-43cd-902d-9302709e483c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:08:59 np0005532762 systemd[1]: run-netns-ovnmeta\x2d0cfca448\x2dff51\x2d45d5\x2d9a96\x2de7d306414608.mount: Deactivated successfully.
Nov 23 16:09:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:00.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.296 230187 INFO nova.virt.libvirt.driver [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deleting instance files /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1_del#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.297 230187 INFO nova.virt.libvirt.driver [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deletion of /var/lib/nova/instances/451aa9f7-4cd0-413e-beed-8a30a8685ff1_del complete#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.344 230187 INFO nova.compute.manager [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.345 230187 DEBUG oslo.service.loopingcall [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.346 230187 DEBUG nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.346 230187 DEBUG nova.network.neutron [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:00 np0005532762 nova_compute[230183]: 2025-11-23 21:09:00.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:09:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.371 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-unplugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.372 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG oslo_concurrency.lockutils [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 DEBUG nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] No waiting events found dispatching network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.373 230187 WARNING nova.compute.manager [req-f5e5a756-9330-46b4-bf23-d6e4772e9394 req-00d8f286-fcfd-4713-98c9-2fa24d696cd3 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received unexpected event network-vif-plugged-932faebb-b274-4e17-94a9-9339a27c275f for instance with vm_state active and task_state deleting.#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.463 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.463 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.464 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:01 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1716591767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.941 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:01 np0005532762 nova_compute[230183]: 2025-11-23 21:09:01.975 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.040 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updated VIF entry in instance network info cache for port 932faebb-b274-4e17-94a9-9339a27c275f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.040 230187 DEBUG nova.network.neutron [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [{"id": "932faebb-b274-4e17-94a9-9339a27c275f", "address": "fa:16:3e:22:80:b0", "network": {"id": "0cfca448-ff51-45d5-9a96-e7d306414608", "bridge": "br-int", "label": "tempest-network-smoke--344329804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap932faebb-b2", "ovs_interfaceid": "932faebb-b274-4e17-94a9-9339a27c275f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.058 230187 DEBUG oslo_concurrency.lockutils [req-d53871ab-0dae-4a98-9f78-10bd66941538 req-672799ac-b621-4040-a298-a45dec597189 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-451aa9f7-4cd0-413e-beed-8a30a8685ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.105 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4925MB free_disk=59.94853591918945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.106 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.166 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.167 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.196 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:09:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:09:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2405935886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.653 230187 DEBUG nova.network.neutron [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.657 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.664 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.666 230187 INFO nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Took 2.32 seconds to deallocate network for instance.#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.683 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.708 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.709 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.710 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.710 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:02 np0005532762 nova_compute[230183]: 2025-11-23 21:09:02.749 230187 DEBUG oslo_concurrency.processutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:02.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4237980380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.246 230187 DEBUG oslo_concurrency.processutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.251 230187 DEBUG nova.compute.provider_tree [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.269 230187 DEBUG nova.scheduler.client.report [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.289 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.310 230187 INFO nova.scheduler.client.report [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 451aa9f7-4cd0-413e-beed-8a30a8685ff1#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.373 230187 DEBUG oslo_concurrency.lockutils [None req-04868b03-2ccc-4b7b-9adb-e6dd8cf81b28 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "451aa9f7-4cd0-413e-beed-8a30a8685ff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.448 230187 DEBUG nova.compute.manager [req-6f4138a2-c3c7-4d93-8684-4aed0ead9400 req-f250c723-8653-4c18-bd03-8d73e4b5e62d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Received event network-vif-deleted-932faebb-b274-4e17-94a9-9339a27c275f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:03 np0005532762 nova_compute[230183]: 2025-11-23 21:09:03.697 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:04 np0005532762 nova_compute[230183]: 2025-11-23 21:09:04.702 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:09:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:04.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:09:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:06.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:07 np0005532762 nova_compute[230183]: 2025-11-23 21:09:07.026 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:07 np0005532762 nova_compute[230183]: 2025-11-23 21:09:07.373 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:07 np0005532762 nova_compute[230183]: 2025-11-23 21:09:07.478 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:09 np0005532762 nova_compute[230183]: 2025-11-23 21:09:09.705 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:11 np0005532762 nova_compute[230183]: 2025-11-23 21:09:11.835 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:11.836 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:09:11 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:11.836 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:09:12 np0005532762 nova_compute[230183]: 2025-11-23 21:09:12.060 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:14 np0005532762 nova_compute[230183]: 2025-11-23 21:09:14.593 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932139.5925066, 451aa9f7-4cd0-413e-beed-8a30a8685ff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:09:14 np0005532762 nova_compute[230183]: 2025-11-23 21:09:14.594 230187 INFO nova.compute.manager [-] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:09:14 np0005532762 nova_compute[230183]: 2025-11-23 21:09:14.614 230187 DEBUG nova.compute.manager [None req-4bbdd229-5301-40e6-9baf-078e4cdcaa05 - - - - - -] [instance: 451aa9f7-4cd0-413e-beed-8a30a8685ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:09:14 np0005532762 nova_compute[230183]: 2025-11-23 21:09:14.746 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:16.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:17 np0005532762 nova_compute[230183]: 2025-11-23 21:09:17.061 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:17 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:17.838 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:18 np0005532762 podman[235821]: 2025-11-23 21:09:18.668917392 +0000 UTC m=+0.061252314 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:09:18 np0005532762 podman[235820]: 2025-11-23 21:09:18.688820893 +0000 UTC m=+0.093208667 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 23 16:09:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:18.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:19 np0005532762 nova_compute[230183]: 2025-11-23 21:09:19.760 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:20.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:22 np0005532762 nova_compute[230183]: 2025-11-23 21:09:22.112 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:22 np0005532762 podman[235870]: 2025-11-23 21:09:22.694264781 +0000 UTC m=+0.096678989 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:09:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:24.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:24 np0005532762 nova_compute[230183]: 2025-11-23 21:09:24.763 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:24.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:27 np0005532762 nova_compute[230183]: 2025-11-23 21:09:27.114 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:29 np0005532762 nova_compute[230183]: 2025-11-23 21:09:29.766 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.540 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.540 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.553 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.615 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.616 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.622 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.622 230187 INFO nova.compute.claims [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:09:30 np0005532762 nova_compute[230183]: 2025-11-23 21:09:30.710 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2351416075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.171 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.179 230187 DEBUG nova.compute.provider_tree [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.196 230187 DEBUG nova.scheduler.client.report [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.221 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.222 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.268 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.271 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.272 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.287 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.379 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.380 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.381 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating image(s)#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.403 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.426 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.451 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.453 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.471 230187 DEBUG nova.policy [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.506 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.507 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.530 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.534 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.795 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.865 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.982 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.983 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Ensure instance console log exists: /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.984 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.984 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:31 np0005532762 nova_compute[230183]: 2025-11-23 21:09:31.985 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:32 np0005532762 nova_compute[230183]: 2025-11-23 21:09:32.116 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:32 np0005532762 nova_compute[230183]: 2025-11-23 21:09:32.538 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Successfully created port: 540c04be-373c-41ca-adee-2010345a34df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:09:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:32.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.583 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Successfully updated port: 540c04be-373c-41ca-adee-2010345a34df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.598 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG nova.compute.manager [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG nova.compute.manager [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.662 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:09:33 np0005532762 nova_compute[230183]: 2025-11-23 21:09:33.879 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:09:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.498 230187 DEBUG nova.network.neutron [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.524 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.524 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance network_info: |[{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.525 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.525 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.528 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start _get_guest_xml network_info=[{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.531 230187 WARNING nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.538 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.538 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.544 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.544 230187 DEBUG nova.virt.libvirt.host [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.545 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.546 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.547 230187 DEBUG nova.virt.hardware [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.549 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:34 np0005532762 nova_compute[230183]: 2025-11-23 21:09:34.768 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:09:35 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2460303140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.024 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.043 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.046 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:09:35 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/935451246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.490 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.492 230187 DEBUG nova.virt.libvirt.vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:09:31Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.492 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.493 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.495 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <uuid>227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</uuid>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <name>instance-00000004</name>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-626843533</nova:name>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:09:34</nova:creationTime>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <nova:port uuid="540c04be-373c-41ca-adee-2010345a34df">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="serial">227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="uuid">227fff00-2bf2-4d7a-9ee7-ff4eaddc0880</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:9d:e3:b7"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <target dev="tap540c04be-37"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/console.log" append="off"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:09:35 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:09:35 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:09:35 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:09:35 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Preparing to wait for external event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.497 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.498 230187 DEBUG nova.virt.libvirt.vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:09:31Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.498 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG nova.network.os_vif_util [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG os_vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540c04be-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.503 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap540c04be-37, col_values=(('external_ids', {'iface-id': '540c04be-373c-41ca-adee-2010345a34df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e3:b7', 'vm-uuid': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:35 np0005532762 NetworkManager[49021]: <info>  [1763932175.5056] manager: (tap540c04be-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.507 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.510 230187 INFO os_vif [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37')#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.560 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.560 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.561 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:9d:e3:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.561 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Using config drive#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.582 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.731 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.732 230187 DEBUG nova.network.neutron [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:35 np0005532762 nova_compute[230183]: 2025-11-23 21:09:35.745 230187 DEBUG oslo_concurrency.lockutils [req-96e14edf-44dd-46e5-b9e2-8a008bd77282 req-4fc7f48b-c06c-47cd-ad29-58f8cd17afc9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.089 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Creating config drive at /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.097 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtv24q34 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.240 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvtv24q34" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.272 230187 DEBUG nova.storage.rbd_utils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.277 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:09:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:36 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:36 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.461 230187 DEBUG oslo_concurrency.processutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.462 230187 INFO nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deleting local config drive /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880/disk.config because it was imported into RBD.#033[00m
Nov 23 16:09:36 np0005532762 kernel: tap540c04be-37: entered promiscuous mode
Nov 23 16:09:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:36Z|00064|binding|INFO|Claiming lport 540c04be-373c-41ca-adee-2010345a34df for this chassis.
Nov 23 16:09:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:36Z|00065|binding|INFO|540c04be-373c-41ca-adee-2010345a34df: Claiming fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.516 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.5173] manager: (tap540c04be-37): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.529 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e3:b7 10.100.0.11'], port_security=['fa:16:3e:9d:e3:b7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20b5a6ce-6e21-4158-a0ab-eaca16146e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c0604ff-606a-413a-88a2-c316eba90e56, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=540c04be-373c-41ca-adee-2010345a34df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.531 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 540c04be-373c-41ca-adee-2010345a34df in datapath 6ff6a2ba-50a1-444b-9685-151db9bcac89 bound to our chassis#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.532 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ff6a2ba-50a1-444b-9685-151db9bcac89#033[00m
Nov 23 16:09:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.544 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec1ad7-6ab7-4a4c-a8f4-de1832cfe6e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.546 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ff6a2ba-51 in ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:09:36 np0005532762 systemd-machined[193469]: New machine qemu-3-instance-00000004.
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.548 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ff6a2ba-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.548 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed45358-e343-4785-94c3-586d5ee0ed1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.549 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4669f84a-f4b2-42f6-b5e6-2b0f0bbd7aab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 systemd-udevd[236352]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.563 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd11e1b-7f3b-4f70-9b55-c16cfe9d17ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.5673] device (tap540c04be-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.5681] device (tap540c04be-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:09:36 np0005532762 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.587 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[45ee7ca1-5351-47cc-ae0a-a707a14fe496]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.588 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:36Z|00066|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df ovn-installed in OVS
Nov 23 16:09:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:36Z|00067|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df up in Southbound
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.598 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.620 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[61959e40-4a25-430a-afd4-9eaa5dfdd0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 systemd-udevd[236355]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.6260] manager: (tap6ff6a2ba-50): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.625 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e26efef-3140-43f9-91fc-c229051f77da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.657 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8328f13-f00b-4362-a30a-5ef4f2ff091d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.660 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[70050365-4fe1-4f7e-9ba7-291f84e7b5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.6812] device (tap6ff6a2ba-50): carrier: link connected
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.685 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[adff457c-5ccc-4381-a239-d3b0a7571fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.699 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4b33a3ea-b029-424a-938f-da6570b8221b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ff6a2ba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412525, 'reachable_time': 38939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236384, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.712 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[101cec63-1e81-41ad-8d7c-4a8dda3be381]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e098'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 412525, 'tstamp': 412525}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236385, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.724 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[476fd3bf-1dce-458b-9c94-4e64eba979e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ff6a2ba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e0:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412525, 'reachable_time': 38939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236386, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.750 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6451b22f-a137-4f08-9944-916576f12a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.772 230187 DEBUG nova.compute.manager [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.772 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG oslo_concurrency.lockutils [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.773 230187 DEBUG nova.compute.manager [req-939af7b8-45dc-4ccb-bfe7-97d1dff9730c req-f6569b54-7dae-47e2-b75e-349224716fb9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Processing event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.812 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[82b8287c-7a80-4bde-86c3-0bb567aa7575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ff6a2ba-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.814 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ff6a2ba-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:36 np0005532762 kernel: tap6ff6a2ba-50: entered promiscuous mode
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.816 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:36 np0005532762 NetworkManager[49021]: <info>  [1763932176.8166] manager: (tap6ff6a2ba-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.818 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ff6a2ba-50, col_values=(('external_ids', {'iface-id': '4bff4598-93d2-442e-90fe-19336d84eb93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:09:36 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:36Z|00068|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.831 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.832 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.833 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b03d21-471d-46db-91e0-01bc1f0dea91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.834 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-6ff6a2ba-50a1-444b-9685-151db9bcac89
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/6ff6a2ba-50a1-444b-9685-151db9bcac89.pid.haproxy
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 6ff6a2ba-50a1-444b-9685-151db9bcac89
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:09:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:36.834 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'env', 'PROCESS_TAG=haproxy-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ff6a2ba-50a1-444b-9685-151db9bcac89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:09:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:36.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.996 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.997 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9959097, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:09:36 np0005532762 nova_compute[230183]: 2025-11-23 21:09:36.997 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Started (Lifecycle Event)#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.003 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.006 230187 INFO nova.virt.libvirt.driver [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance spawned successfully.#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.007 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.027 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.033 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.038 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.038 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.039 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.040 230187 DEBUG nova.virt.libvirt.driver [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.077 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.078 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9960365, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.079 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.104 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.106 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932176.9999523, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.107 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.116 230187 INFO nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 5.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.116 230187 DEBUG nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.118 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.125 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.128 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.160 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.178 230187 INFO nova.compute.manager [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 6.58 seconds to build instance.#033[00m
Nov 23 16:09:37 np0005532762 nova_compute[230183]: 2025-11-23 21:09:37.193 230187 DEBUG oslo_concurrency.lockutils [None req-be412014-dc3a-45b4-8042-20983b5c52ac 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:37 np0005532762 podman[236460]: 2025-11-23 21:09:37.204042545 +0000 UTC m=+0.047368814 container create 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 16:09:37 np0005532762 systemd[1]: Started libpod-conmon-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope.
Nov 23 16:09:37 np0005532762 podman[236460]: 2025-11-23 21:09:37.179152202 +0000 UTC m=+0.022478491 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:09:37 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:09:37 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4152db43b6d5104340674417ff7884d350338c590c450707f50593d1fb1c9d99/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:09:37 np0005532762 podman[236460]: 2025-11-23 21:09:37.295611397 +0000 UTC m=+0.138937696 container init 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:09:37 np0005532762 podman[236460]: 2025-11-23 21:09:37.301222257 +0000 UTC m=+0.144548536 container start 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 16:09:37 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : New worker (236481) forked
Nov 23 16:09:37 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : Loading success.
Nov 23 16:09:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.827 230187 DEBUG nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.827 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.828 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.828 230187 DEBUG oslo_concurrency.lockutils [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.829 230187 DEBUG nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:09:38 np0005532762 nova_compute[230183]: 2025-11-23 21:09:38.829 230187 WARNING nova.compute.manager [req-b8d9279e-8d35-452a-afad-0a9398019b39 req-f5abba18-b26c-483b-9ea0-dff01db04c81 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received unexpected event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df for instance with vm_state active and task_state None.#033[00m
Nov 23 16:09:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:39Z|00069|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 16:09:39 np0005532762 nova_compute[230183]: 2025-11-23 21:09:39.756 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:39 np0005532762 NetworkManager[49021]: <info>  [1763932179.7585] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 23 16:09:39 np0005532762 NetworkManager[49021]: <info>  [1763932179.7592] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 23 16:09:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:39Z|00070|binding|INFO|Releasing lport 4bff4598-93d2-442e-90fe-19336d84eb93 from this chassis (sb_readonly=0)
Nov 23 16:09:39 np0005532762 nova_compute[230183]: 2025-11-23 21:09:39.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:39 np0005532762 nova_compute[230183]: 2025-11-23 21:09:39.795 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.923 230187 DEBUG nova.compute.manager [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.923 230187 DEBUG nova.compute.manager [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:09:40 np0005532762 nova_compute[230183]: 2025-11-23 21:09:40.924 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:09:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:42 np0005532762 nova_compute[230183]: 2025-11-23 21:09:42.120 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:09:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:09:43 np0005532762 nova_compute[230183]: 2025-11-23 21:09:43.522 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:09:43 np0005532762 nova_compute[230183]: 2025-11-23 21:09:43.522 230187 DEBUG nova.network.neutron [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:43 np0005532762 nova_compute[230183]: 2025-11-23 21:09:43.547 230187 DEBUG oslo_concurrency.lockutils [req-55def611-d3c7-4406-a3d9-0308faa9c3cb req-169affcb-a493-44da-bc8b-ed2dfde46065 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:09:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:45 np0005532762 nova_compute[230183]: 2025-11-23 21:09:45.539 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:46.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:46.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:47 np0005532762 nova_compute[230183]: 2025-11-23 21:09:47.122 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:49 np0005532762 podman[236524]: 2025-11-23 21:09:49.631073181 +0000 UTC m=+0.046903822 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:09:49 np0005532762 podman[236523]: 2025-11-23 21:09:49.688902503 +0000 UTC m=+0.104350943 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:09:50 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:50Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 16:09:50 np0005532762 ovn_controller[132845]: 2025-11-23T21:09:50Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:e3:b7 10.100.0.11
Nov 23 16:09:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:09:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:09:50 np0005532762 nova_compute[230183]: 2025-11-23 21:09:50.541 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.066 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:09:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:09:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:09:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:09:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:52 np0005532762 nova_compute[230183]: 2025-11-23 21:09:52.124 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:53 np0005532762 podman[236570]: 2025-11-23 21:09:53.664686691 +0000 UTC m=+0.082171152 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 16:09:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:54.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:55 np0005532762 nova_compute[230183]: 2025-11-23 21:09:55.544 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:56.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:09:56 np0005532762 nova_compute[230183]: 2025-11-23 21:09:56.609 230187 INFO nova.compute.manager [None req-0e564f10-93e4-4ea1-9450-01843133a446 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Get console output#033[00m
Nov 23 16:09:56 np0005532762 nova_compute[230183]: 2025-11-23 21:09:56.615 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:09:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:57 np0005532762 nova_compute[230183]: 2025-11-23 21:09:57.126 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:09:57 np0005532762 nova_compute[230183]: 2025-11-23 21:09:57.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:09:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:09:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:09:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:09:58 np0005532762 nova_compute[230183]: 2025-11-23 21:09:58.377 230187 DEBUG nova.compute.manager [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-changed-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:09:58 np0005532762 nova_compute[230183]: 2025-11-23 21:09:58.378 230187 DEBUG nova.compute.manager [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing instance network info cache due to event network-changed-540c04be-373c-41ca-adee-2010345a34df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:09:58 np0005532762 nova_compute[230183]: 2025-11-23 21:09:58.378 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:09:58 np0005532762 nova_compute[230183]: 2025-11-23 21:09:58.379 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:09:58 np0005532762 nova_compute[230183]: 2025-11-23 21:09:58.379 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Refreshing network info cache for port 540c04be-373c-41ca-adee-2010345a34df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:09:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:09:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:09:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:09:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:09:59 np0005532762 nova_compute[230183]: 2025-11-23 21:09:59.331 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated VIF entry in instance network info cache for port 540c04be-373c-41ca-adee-2010345a34df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:09:59 np0005532762 nova_compute[230183]: 2025-11-23 21:09:59.332 230187 DEBUG nova.network.neutron [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:09:59 np0005532762 nova_compute[230183]: 2025-11-23 21:09:59.358 230187 DEBUG oslo_concurrency.lockutils [req-e4b625f9-fa61-4ce7-b6f0-f2f2d6f347fb req-36543de2-0d29-48f1-990e-b68cd29b7f99 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:10:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:00 np0005532762 nova_compute[230183]: 2025-11-23 21:10:00.587 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:00 np0005532762 ceph-mon[80135]: overall HEALTH_OK
Nov 23 16:10:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.425 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:10:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:10:01 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2877369591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.891 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.958 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:10:01 np0005532762 nova_compute[230183]: 2025-11-23 21:10:01.958 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.101 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4752MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.102 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.129 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.170 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.204 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:10:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:02.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:10:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4012904158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.631 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.636 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.652 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.669 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:10:02 np0005532762 nova_compute[230183]: 2025-11-23 21:10:02.669 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.670 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.785 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.785 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.786 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:10:03 np0005532762 nova_compute[230183]: 2025-11-23 21:10:03.786 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:10:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:05 np0005532762 nova_compute[230183]: 2025-11-23 21:10:05.590 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 23 16:10:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:06.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.130 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.798 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [{"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.826 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.826 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.827 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.828 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:07 np0005532762 nova_compute[230183]: 2025-11-23 21:10:07.828 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:08.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:09 np0005532762 nova_compute[230183]: 2025-11-23 21:10:09.582 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:10.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:10 np0005532762 nova_compute[230183]: 2025-11-23 21:10:10.643 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:11.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:12 np0005532762 nova_compute[230183]: 2025-11-23 21:10:12.131 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:12.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:14.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:15.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:15 np0005532762 nova_compute[230183]: 2025-11-23 21:10:15.645 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:16.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:17.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:17 np0005532762 nova_compute[230183]: 2025-11-23 21:10:17.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:18.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:18 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:18.394 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:10:18 np0005532762 nova_compute[230183]: 2025-11-23 21:10:18.395 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:18 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:18.396 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:10:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:19.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:20 np0005532762 podman[236676]: 2025-11-23 21:10:20.43970292 +0000 UTC m=+0.058717017 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:10:20 np0005532762 podman[236675]: 2025-11-23 21:10:20.478668779 +0000 UTC m=+0.097161882 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:10:20 np0005532762 nova_compute[230183]: 2025-11-23 21:10:20.647 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:21 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:22 np0005532762 nova_compute[230183]: 2025-11-23 21:10:22.134 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:24 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:24.398 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:10:24 np0005532762 podman[236723]: 2025-11-23 21:10:24.667662406 +0000 UTC m=+0.084572407 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 23 16:10:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:25.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:25 np0005532762 nova_compute[230183]: 2025-11-23 21:10:25.651 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:26 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:27.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:27 np0005532762 nova_compute[230183]: 2025-11-23 21:10:27.135 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:29.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:30 np0005532762 nova_compute[230183]: 2025-11-23 21:10:30.654 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:31.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:32 np0005532762 nova_compute[230183]: 2025-11-23 21:10:32.137 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:32.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:34.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:10:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s#012Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:10:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:35.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:35 np0005532762 nova_compute[230183]: 2025-11-23 21:10:35.656 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:36.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:37 np0005532762 nova_compute[230183]: 2025-11-23 21:10:37.244 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:38.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:10:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.521 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.522 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.523 230187 INFO nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Terminating instance#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.524 230187 DEBUG nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:10:39 np0005532762 kernel: tap540c04be-37 (unregistering): left promiscuous mode
Nov 23 16:10:39 np0005532762 NetworkManager[49021]: <info>  [1763932239.5746] device (tap540c04be-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:10:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:10:39Z|00071|binding|INFO|Releasing lport 540c04be-373c-41ca-adee-2010345a34df from this chassis (sb_readonly=0)
Nov 23 16:10:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:10:39Z|00072|binding|INFO|Setting lport 540c04be-373c-41ca-adee-2010345a34df down in Southbound
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.584 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:10:39Z|00073|binding|INFO|Removing iface tap540c04be-37 ovn-installed in OVS
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.586 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.595 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e3:b7 10.100.0.11'], port_security=['fa:16:3e:9d:e3:b7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '227fff00-2bf2-4d7a-9ee7-ff4eaddc0880', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20b5a6ce-6e21-4158-a0ab-eaca16146e81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c0604ff-606a-413a-88a2-c316eba90e56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=540c04be-373c-41ca-adee-2010345a34df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.596 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 540c04be-373c-41ca-adee-2010345a34df in datapath 6ff6a2ba-50a1-444b-9685-151db9bcac89 unbound from our chassis#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.597 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ff6a2ba-50a1-444b-9685-151db9bcac89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.598 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dd43d2-b990-4b4e-b80a-eb47dadbabfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.598 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 namespace which is not needed anymore#033[00m
Nov 23 16:10:39 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:39 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:39 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.602 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532762 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 15.742s CPU time.
Nov 23 16:10:39 np0005532762 systemd-machined[193469]: Machine qemu-3-instance-00000004 terminated.
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : haproxy version is 2.8.14-c23fe91
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [NOTICE]   (236479) : path to executable is /usr/sbin/haproxy
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : Exiting Master process...
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : Exiting Master process...
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [ALERT]    (236479) : Current worker (236481) exited with code 143 (Terminated)
Nov 23 16:10:39 np0005532762 neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89[236475]: [WARNING]  (236479) : All workers exited. Exiting... (0)
Nov 23 16:10:39 np0005532762 systemd[1]: libpod-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532762 podman[236953]: 2025-11-23 21:10:39.729403934 +0000 UTC m=+0.045952367 container died 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.756 230187 INFO nova.virt.libvirt.driver [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Instance destroyed successfully.#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.758 230187 DEBUG nova.objects.instance [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:10:39 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5-userdata-shm.mount: Deactivated successfully.
Nov 23 16:10:39 np0005532762 systemd[1]: var-lib-containers-storage-overlay-4152db43b6d5104340674417ff7884d350338c590c450707f50593d1fb1c9d99-merged.mount: Deactivated successfully.
Nov 23 16:10:39 np0005532762 podman[236953]: 2025-11-23 21:10:39.78097867 +0000 UTC m=+0.097527043 container cleanup 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.784 230187 DEBUG nova.virt.libvirt.vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-23T21:09:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-626843533',display_name='tempest-TestNetworkBasicOps-server-626843533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-626843533',id=4,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzFKgfz1QVXAYBgw9WYLDmImQIyNZIUJvYaUSeZsmfvEoA7CUytAymkLL0tqBwm8cJVrzUl6E9R6D/qdooFrc51SiAGOyjiHvRBM9c3gaFOzuWbTw1Aa3lZ7MmCQiSUEQ==',key_name='tempest-TestNetworkBasicOps-1952591884',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:09:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mabh37mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:09:37Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=227fff00-2bf2-4d7a-9ee7-ff4eaddc0880,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.785 230187 DEBUG nova.network.os_vif_util [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "540c04be-373c-41ca-adee-2010345a34df", "address": "fa:16:3e:9d:e3:b7", "network": {"id": "6ff6a2ba-50a1-444b-9685-151db9bcac89", "bridge": "br-int", "label": "tempest-network-smoke--285822202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap540c04be-37", "ovs_interfaceid": "540c04be-373c-41ca-adee-2010345a34df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.785 230187 DEBUG nova.network.os_vif_util [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.786 230187 DEBUG os_vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.787 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.787 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540c04be-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 systemd[1]: libpod-conmon-21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5.scope: Deactivated successfully.
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.793 230187 INFO os_vif [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e3:b7,bridge_name='br-int',has_traffic_filtering=True,id=540c04be-373c-41ca-adee-2010345a34df,network=Network(6ff6a2ba-50a1-444b-9685-151db9bcac89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap540c04be-37')#033[00m
Nov 23 16:10:39 np0005532762 podman[236991]: 2025-11-23 21:10:39.844196307 +0000 UTC m=+0.039453214 container remove 21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.849 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[81009a2f-8b2f-4df3-a7ca-62bca3a33332]: (4, ('Sun Nov 23 09:10:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 (21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5)\n21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5\nSun Nov 23 09:10:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 (21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5)\n21203f9aaa6cd9549801a5961e8c28a8bfc893b9cd6658b7ad005ac54c1b96c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.850 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9b63c37f-4d60-408b-9ec1-2c4c6f52e2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.851 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ff6a2ba-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.852 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 kernel: tap6ff6a2ba-50: left promiscuous mode
Nov 23 16:10:39 np0005532762 nova_compute[230183]: 2025-11-23 21:10:39.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.868 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcfc70d-0e22-4104-8ca0-2b4d4373395e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.880 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dc873c0c-29f2-4828-956b-ed970aee6523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.881 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7b60759d-04bc-47e3-8784-72017acfc9a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.893 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8142c5d0-22fd-44f1-a6f6-c9047426f772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412518, 'reachable_time': 44784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237024, 'error': None, 'target': 'ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.895 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ff6a2ba-50a1-444b-9685-151db9bcac89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:10:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:39.895 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[bd59d1ee-30df-4f61-856a-b2325a29b228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:10:39 np0005532762 systemd[1]: run-netns-ovnmeta\x2d6ff6a2ba\x2d50a1\x2d444b\x2d9685\x2d151db9bcac89.mount: Deactivated successfully.
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.245 230187 INFO nova.virt.libvirt.driver [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deleting instance files /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_del#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.246 230187 INFO nova.virt.libvirt.driver [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deletion of /var/lib/nova/instances/227fff00-2bf2-4d7a-9ee7-ff4eaddc0880_del complete#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.346 230187 INFO nova.compute.manager [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.346 230187 DEBUG oslo.service.loopingcall [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.347 230187 DEBUG nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.347 230187 DEBUG nova.network.neutron [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.352 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.352 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG oslo_concurrency.lockutils [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:10:40 np0005532762 nova_compute[230183]: 2025-11-23 21:10:40.353 230187 DEBUG nova.compute.manager [req-04a716ff-99bb-4a83-a119-8ab16e95fee1 req-3d25a8a4-e174-42ff-9ade-610bd5825b92 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-unplugged-540c04be-373c-41ca-adee-2010345a34df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 16:10:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:42 np0005532762 nova_compute[230183]: 2025-11-23 21:10:42.247 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.299 230187 DEBUG nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.299 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG oslo_concurrency.lockutils [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 DEBUG nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] No waiting events found dispatching network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.300 230187 WARNING nova.compute.manager [req-dc4e1632-79dd-4f3a-9ff8-cebabc78a014 req-788fb01a-3fa9-4773-b5fe-2fa1157a5e0e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received unexpected event network-vif-plugged-540c04be-373c-41ca-adee-2010345a34df for instance with vm_state active and task_state deleting.#033[00m
Nov 23 16:10:44 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:44 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:10:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:44 np0005532762 nova_compute[230183]: 2025-11-23 21:10:44.821 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:45.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.418 230187 DEBUG nova.network.neutron [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.434 230187 INFO nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Took 5.09 seconds to deallocate network for instance.#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.473 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.474 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.487 230187 DEBUG nova.compute.manager [req-49783259-4253-47e5-af24-a0641ab302dc req-a7315484-9869-4f92-8b32-fe39f240c204 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Received event network-vif-deleted-540c04be-373c-41ca-adee-2010345a34df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.546 230187 DEBUG oslo_concurrency.processutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:10:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:10:45 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3590546491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.977 230187 DEBUG oslo_concurrency.processutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:10:45 np0005532762 nova_compute[230183]: 2025-11-23 21:10:45.983 230187 DEBUG nova.compute.provider_tree [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:10:46 np0005532762 nova_compute[230183]: 2025-11-23 21:10:46.007 230187 DEBUG nova.scheduler.client.report [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:10:46 np0005532762 nova_compute[230183]: 2025-11-23 21:10:46.028 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:46 np0005532762 nova_compute[230183]: 2025-11-23 21:10:46.072 230187 INFO nova.scheduler.client.report [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880#033[00m
Nov 23 16:10:46 np0005532762 nova_compute[230183]: 2025-11-23 21:10:46.173 230187 DEBUG oslo_concurrency.lockutils [None req-952a17b6-6ba8-4ea1-8dc1-8429f7dca130 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "227fff00-2bf2-4d7a-9ee7-ff4eaddc0880" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:47 np0005532762 nova_compute[230183]: 2025-11-23 21:10:47.249 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:49 np0005532762 nova_compute[230183]: 2025-11-23 21:10:49.824 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:50 np0005532762 podman[237104]: 2025-11-23 21:10:50.660844111 +0000 UTC m=+0.065984631 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:10:50 np0005532762 podman[237103]: 2025-11-23 21:10:50.69564274 +0000 UTC m=+0.099722071 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 16:10:50 np0005532762 nova_compute[230183]: 2025-11-23 21:10:50.770 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:50 np0005532762 nova_compute[230183]: 2025-11-23 21:10:50.887 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:10:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:10:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:10:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:10:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:52 np0005532762 nova_compute[230183]: 2025-11-23 21:10:52.251 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:54.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:54 np0005532762 nova_compute[230183]: 2025-11-23 21:10:54.755 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932239.7533598, 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:10:54 np0005532762 nova_compute[230183]: 2025-11-23 21:10:54.755 230187 INFO nova.compute.manager [-] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:10:54 np0005532762 nova_compute[230183]: 2025-11-23 21:10:54.771 230187 DEBUG nova.compute.manager [None req-6c57b200-e3f2-40af-a01c-edaec737378d - - - - - -] [instance: 227fff00-2bf2-4d7a-9ee7-ff4eaddc0880] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:10:54 np0005532762 nova_compute[230183]: 2025-11-23 21:10:54.826 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:55.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:55 np0005532762 podman[237150]: 2025-11-23 21:10:55.656250612 +0000 UTC m=+0.062060206 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:10:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:10:57 np0005532762 nova_compute[230183]: 2025-11-23 21:10:57.252 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:10:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:10:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:10:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:10:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:58 np0005532762 nova_compute[230183]: 2025-11-23 21:10:58.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:10:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:10:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:10:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:10:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:10:59 np0005532762 nova_compute[230183]: 2025-11-23 21:10:59.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:01.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:11:01 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370529717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:11:01 np0005532762 nova_compute[230183]: 2025-11-23 21:11:01.905 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.046 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4932MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.100 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.100 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.119 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.256 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:11:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/413225064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.565 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.570 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.582 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.608 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:11:02 np0005532762 nova_compute[230183]: 2025-11-23 21:11:02.608 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.604 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.605 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.624 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:03 np0005532762 nova_compute[230183]: 2025-11-23 21:11:03.625 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:11:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:04 np0005532762 nova_compute[230183]: 2025-11-23 21:11:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:04 np0005532762 nova_compute[230183]: 2025-11-23 21:11:04.832 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:05.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:11:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5424 writes, 28K keys, 5424 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s#012Cumulative WAL: 5424 writes, 5424 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1496 writes, 6941 keys, 1496 commit groups, 1.0 writes per commit group, ingest: 16.65 MB, 0.03 MB/s#012Interval WAL: 1496 writes, 1496 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     56.6      0.72              0.10        14    0.051       0      0       0.0       0.0#012  L6      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2     86.3     74.0      2.29              0.45        13    0.176     67K   6874       0.0       0.0#012 Sum      1/0   12.44 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2     65.7     69.8      3.01              0.55        27    0.111     67K   6874       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     58.6     58.2      1.03              0.17         8    0.129     23K   2050       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     86.3     74.0      2.29              0.45        13    0.176     67K   6874       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     56.7      0.72              0.10        13    0.055       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 3.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 14.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000101 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(795,14.18 MB,4.66492%) FilterBlock(27,201.42 KB,0.0647043%) IndexBlock(27,359.39 KB,0.11545%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 16:11:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:06.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:07 np0005532762 nova_compute[230183]: 2025-11-23 21:11:07.255 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:08.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.571 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.572 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.585 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.669 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.670 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.675 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.675 230187 INFO nova.compute.claims [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:11:08 np0005532762 nova_compute[230183]: 2025-11-23 21:11:08.759 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:11:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4195050678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.185 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.192 230187 DEBUG nova.compute.provider_tree [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.207 230187 DEBUG nova.scheduler.client.report [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.226 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.227 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.270 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.270 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.284 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:11:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.298 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.408 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.410 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.410 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating image(s)#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.433 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.466 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.499 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.504 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.528 230187 DEBUG nova.policy [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.566 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.566 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.567 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.567 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.591 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.594 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.835 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.893 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:09 np0005532762 nova_compute[230183]: 2025-11-23 21:11:09.979 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.100 230187 DEBUG nova.objects.instance [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.120 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.120 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Ensure instance console log exists: /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.121 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.121 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.122 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:10 np0005532762 nova_compute[230183]: 2025-11-23 21:11:10.195 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully created port: bdbb1df8-a028-4685-9661-24563619eb80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:11:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:10.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:11.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.623 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully updated port: bdbb1df8-a028-4685-9661-24563619eb80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.671 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.672 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.672 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG nova.compute.manager [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG nova.compute.manager [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.741 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.758140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271758186, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2373, "num_deletes": 251, "total_data_size": 6217556, "memory_usage": 6292320, "flush_reason": "Manual Compaction"}
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271805614, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 4048390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26129, "largest_seqno": 28497, "table_properties": {"data_size": 4038870, "index_size": 5950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19848, "raw_average_key_size": 20, "raw_value_size": 4019905, "raw_average_value_size": 4118, "num_data_blocks": 261, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932059, "oldest_key_time": 1763932059, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 47771 microseconds, and 8905 cpu microseconds.
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.805844) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 4048390 bytes OK
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.806046) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807841) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807919) EVENT_LOG_v1 {"time_micros": 1763932271807910, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.807944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6207102, prev total WAL file size 6207102, number of live WAL files 2.
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3953KB)], [51(12MB)]
Nov 23 16:11:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932271809686, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 17091173, "oldest_snapshot_seqno": -1}
Nov 23 16:11:11 np0005532762 nova_compute[230183]: 2025-11-23 21:11:11.824 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5847 keys, 14928601 bytes, temperature: kUnknown
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272082717, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14928601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14888535, "index_size": 24340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148729, "raw_average_key_size": 25, "raw_value_size": 14782089, "raw_average_value_size": 2528, "num_data_blocks": 994, "num_entries": 5847, "num_filter_entries": 5847, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.083208) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14928601 bytes
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.085914) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.6 rd, 54.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6365, records dropped: 518 output_compression: NoCompression
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.085943) EVENT_LOG_v1 {"time_micros": 1763932272085929, "job": 30, "event": "compaction_finished", "compaction_time_micros": 273219, "compaction_time_cpu_micros": 50926, "output_level": 6, "num_output_files": 1, "total_output_size": 14928601, "num_input_records": 6365, "num_output_records": 5847, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272087433, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932272092268, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:11.809372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:11:12.092527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:11:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.257 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:12.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.862 230187 DEBUG nova.network.neutron [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.883 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.883 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance network_info: |[{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.884 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.884 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.886 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start _get_guest_xml network_info=[{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.890 230187 WARNING nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.897 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.897 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.901 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.901 230187 DEBUG nova.virt.libvirt.host [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.902 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.903 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.904 230187 DEBUG nova.virt.hardware [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:11:12 np0005532762 nova_compute[230183]: 2025-11-23 21:11:12.906 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:11:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2203487789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.395 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.419 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.422 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:11:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1161995047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.875 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.877 230187 DEBUG nova.virt.libvirt.vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:11:09Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.877 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.878 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.879 230187 DEBUG nova.objects.instance [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.895 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <name>instance-00000006</name>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:11:12</nova:creationTime>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="serial">4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="uuid">4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:f3:c9:f4"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <target dev="tapbdbb1df8-a0"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log" append="off"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:11:13 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:11:13 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:11:13 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:11:13 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.897 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Preparing to wait for external event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.897 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.898 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.898 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.899 230187 DEBUG nova.virt.libvirt.vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:11:09Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.900 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.900 230187 DEBUG nova.network.os_vif_util [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.901 230187 DEBUG os_vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.902 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.902 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.903 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.908 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.909 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdbb1df8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.909 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdbb1df8-a0, col_values=(('external_ids', {'iface-id': 'bdbb1df8-a028-4685-9661-24563619eb80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:c9:f4', 'vm-uuid': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.911 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:13 np0005532762 NetworkManager[49021]: <info>  [1763932273.9123] manager: (tapbdbb1df8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.914 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.917 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.918 230187 INFO os_vif [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0')#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.975 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:f3:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:11:13 np0005532762 nova_compute[230183]: 2025-11-23 21:11:13.976 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Using config drive#033[00m
Nov 23 16:11:14 np0005532762 nova_compute[230183]: 2025-11-23 21:11:14.001 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:14.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.242 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Creating config drive at /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.251 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sm8t84h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.378 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sm8t84h" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.409 230187 DEBUG nova.storage.rbd_utils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.412 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.574 230187 DEBUG oslo_concurrency.processutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.576 230187 INFO nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deleting local config drive /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/disk.config because it was imported into RBD.#033[00m
Nov 23 16:11:15 np0005532762 kernel: tapbdbb1df8-a0: entered promiscuous mode
Nov 23 16:11:15 np0005532762 NetworkManager[49021]: <info>  [1763932275.6185] manager: (tapbdbb1df8-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:15 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:15Z|00074|binding|INFO|Claiming lport bdbb1df8-a028-4685-9661-24563619eb80 for this chassis.
Nov 23 16:11:15 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:15Z|00075|binding|INFO|bdbb1df8-a028-4685-9661-24563619eb80: Claiming fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.623 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.625 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:15 np0005532762 systemd-machined[193469]: New machine qemu-4-instance-00000006.
Nov 23 16:11:15 np0005532762 systemd-udevd[237574]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:11:15 np0005532762 NetworkManager[49021]: <info>  [1763932275.6615] device (tapbdbb1df8-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:11:15 np0005532762 NetworkManager[49021]: <info>  [1763932275.6625] device (tapbdbb1df8-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:11:15 np0005532762 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.686 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:15 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:15Z|00076|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 ovn-installed in OVS
Nov 23 16:11:15 np0005532762 nova_compute[230183]: 2025-11-23 21:11:15.690 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:15 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:15Z|00077|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 up in Southbound
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.760 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c9:f4 10.100.0.12'], port_security=['fa:16:3e:f3:c9:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa502c12-d22c-490c-942b-57c2b1624866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30b87ecc-e7bf-46f1-a605-8bcfe0ecba45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8207d226-2b2e-4ad5-9d7b-3777cdc61652, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=bdbb1df8-a028-4685-9661-24563619eb80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.761 142158 INFO neutron.agent.ovn.metadata.agent [-] Port bdbb1df8-a028-4685-9661-24563619eb80 in datapath aa502c12-d22c-490c-942b-57c2b1624866 bound to our chassis#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.762 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa502c12-d22c-490c-942b-57c2b1624866#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.775 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3a59c1-d9a7-494a-9848-ef30b4beee3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.775 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa502c12-d1 in ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.777 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa502c12-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.777 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f8a4f1-ccb6-45b1-9d1e-82a6a9050258]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.778 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddb0a23-e88e-4032-b4de-e39a5dda4e1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.794 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[890f846b-84af-475b-a06e-b1b848f48ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.815 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc828ce-d9ff-47cf-88c2-c3d7002a0e71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.844 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[943736b8-65ec-4de1-b9aa-fc7997494611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.853 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0b953b2f-4e56-43e7-bb48-f9167a836c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 NetworkManager[49021]: <info>  [1763932275.8545] manager: (tapaa502c12-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Nov 23 16:11:15 np0005532762 systemd-udevd[237576]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.885 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0d20ff-959d-4ce7-a429-215a4e792f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.888 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdb29f8-380a-42d9-8990-1efc07d0059e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 NetworkManager[49021]: <info>  [1763932275.9078] device (tapaa502c12-d0): carrier: link connected
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.913 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3bb504-75a5-4299-942a-3cb199d8337b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.930 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2420b9-6459-440f-9b61-b40282b742d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa502c12-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:8b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422448, 'reachable_time': 42322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237609, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.945 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c61bb352-ea26-4ce8-aab6-c7ec918d7af9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:8b05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 422448, 'tstamp': 422448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237610, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.961 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[79b8e250-f04c-4d9f-b028-c96146080ec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa502c12-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:8b:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422448, 'reachable_time': 42322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237611, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:15.996 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[48b9365b-e4ce-4c77-963f-770a9ca0daca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.035 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.036 230187 DEBUG nova.network.neutron [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.050 230187 DEBUG oslo_concurrency.lockutils [req-8f870fea-b29d-4d8a-88bb-e6f622589e54 req-3305a24d-d027-469c-824c-297bfc5c47ec 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.061 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf21341d-a96b-497d-993c-1b9452b4a282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.062 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa502c12-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.062 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.063 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa502c12-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.064 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:16 np0005532762 NetworkManager[49021]: <info>  [1763932276.0652] manager: (tapaa502c12-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 23 16:11:16 np0005532762 kernel: tapaa502c12-d0: entered promiscuous mode
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.067 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.068 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa502c12-d0, col_values=(('external_ids', {'iface-id': '882afaa1-9000-493d-808e-b1d906b6e642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:16 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:16Z|00078|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.083 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.084 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.085 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[587800fc-69d0-4faf-be0e-ae59fe288aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.085 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-aa502c12-d22c-490c-942b-57c2b1624866
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/aa502c12-d22c-490c-942b-57c2b1624866.pid.haproxy
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID aa502c12-d22c-490c-942b-57c2b1624866
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:11:16 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:16.086 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'env', 'PROCESS_TAG=haproxy-aa502c12-d22c-490c-942b-57c2b1624866', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa502c12-d22c-490c-942b-57c2b1624866.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.202 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.2016847, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.202 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Started (Lifecycle Event)#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.235 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.238 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.2026641, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.239 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG nova.compute.manager [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.242 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.243 230187 DEBUG oslo_concurrency.lockutils [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.243 230187 DEBUG nova.compute.manager [req-16f741ad-7acf-4753-8335-13f6e75cb21d req-ca2f103e-0009-4fa9-bdc2-a779a773660f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Processing event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.244 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.247 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.249 230187 INFO nova.virt.libvirt.driver [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance spawned successfully.#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.250 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.254 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.258 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932276.246582, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.259 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.267 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.267 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.268 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.268 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.269 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.270 230187 DEBUG nova.virt.libvirt.driver [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.276 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.280 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.312 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.343 230187 INFO nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 6.93 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.345 230187 DEBUG nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.411 230187 INFO nova.compute.manager [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 7.78 seconds to build instance.#033[00m
Nov 23 16:11:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:16.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:16 np0005532762 nova_compute[230183]: 2025-11-23 21:11:16.427 230187 DEBUG oslo_concurrency.lockutils [None req-afc50ea9-eb66-46e5-9b21-3277793c9b00 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:16 np0005532762 podman[237685]: 2025-11-23 21:11:16.421107903 +0000 UTC m=+0.031649625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:11:16 np0005532762 podman[237685]: 2025-11-23 21:11:16.691458756 +0000 UTC m=+0.302000418 container create ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:11:16 np0005532762 systemd[1]: Started libpod-conmon-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope.
Nov 23 16:11:16 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:11:16 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d0c50b76b192f5ce5a4fd663eee6064b85b526a900eeee678e7ce0a629a71ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:16 np0005532762 podman[237685]: 2025-11-23 21:11:16.83177424 +0000 UTC m=+0.442315862 container init ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 16:11:16 np0005532762 podman[237685]: 2025-11-23 21:11:16.83778646 +0000 UTC m=+0.448328092 container start ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:11:16 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : New worker (237706) forked
Nov 23 16:11:16 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : Loading success.
Nov 23 16:11:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:17 np0005532762 nova_compute[230183]: 2025-11-23 21:11:17.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.384 230187 DEBUG nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.389 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.395 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.399 230187 DEBUG oslo_concurrency.lockutils [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.402 230187 DEBUG nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.404 230187 WARNING nova.compute.manager [req-93eecc34-08b6-4a1d-926d-7359a7a4080c req-d40072df-e965-4802-b6cc-b580d2ede504 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with vm_state active and task_state None.#033[00m
Nov 23 16:11:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:18.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:18 np0005532762 nova_compute[230183]: 2025-11-23 21:11:18.912 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:20.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:21.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:21 np0005532762 NetworkManager[49021]: <info>  [1763932281.3712] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 23 16:11:21 np0005532762 NetworkManager[49021]: <info>  [1763932281.3720] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 23 16:11:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:21Z|00079|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.372 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:21 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:21Z|00080|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.425 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:21 np0005532762 podman[237720]: 2025-11-23 21:11:21.731929929 +0000 UTC m=+0.136230996 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:11:21 np0005532762 podman[237719]: 2025-11-23 21:11:21.747140304 +0000 UTC m=+0.164613162 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.819 230187 DEBUG nova.compute.manager [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.819 230187 DEBUG nova.compute.manager [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:11:21 np0005532762 nova_compute[230183]: 2025-11-23 21:11:21.820 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:11:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:22 np0005532762 nova_compute[230183]: 2025-11-23 21:11:22.264 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:22.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:23 np0005532762 nova_compute[230183]: 2025-11-23 21:11:23.055 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:11:23 np0005532762 nova_compute[230183]: 2025-11-23 21:11:23.057 230187 DEBUG nova.network.neutron [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:11:23 np0005532762 nova_compute[230183]: 2025-11-23 21:11:23.073 230187 DEBUG oslo_concurrency.lockutils [req-9a81574b-2545-4e44-892e-14857fba333e req-f760c495-ed77-4adc-a48a-bfb1781d5806 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:11:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:23 np0005532762 nova_compute[230183]: 2025-11-23 21:11:23.914 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:24.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:25.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:26 np0005532762 podman[237789]: 2025-11-23 21:11:26.658953335 +0000 UTC m=+0.065571380 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:11:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:27 np0005532762 nova_compute[230183]: 2025-11-23 21:11:27.265 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:27.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:28 np0005532762 nova_compute[230183]: 2025-11-23 21:11:28.916 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:30 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:30Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 16:11:30 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:30Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:c9:f4 10.100.0.12
Nov 23 16:11:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:31.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:32 np0005532762 nova_compute[230183]: 2025-11-23 21:11:32.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:33.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:33 np0005532762 nova_compute[230183]: 2025-11-23 21:11:33.951 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:36 np0005532762 nova_compute[230183]: 2025-11-23 21:11:36.237 230187 INFO nova.compute.manager [None req-0ae311a4-a435-4e3e-a940-7dd84b2a1769 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Get console output#033[00m
Nov 23 16:11:36 np0005532762 nova_compute[230183]: 2025-11-23 21:11:36.241 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:11:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:36.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:36.735 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:11:36 np0005532762 nova_compute[230183]: 2025-11-23 21:11:36.736 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:36 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:36.737 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:11:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:37 np0005532762 nova_compute[230183]: 2025-11-23 21:11:37.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:37.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:38 np0005532762 nova_compute[230183]: 2025-11-23 21:11:38.702 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:38 np0005532762 nova_compute[230183]: 2025-11-23 21:11:38.703 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:38 np0005532762 nova_compute[230183]: 2025-11-23 21:11:38.704 230187 DEBUG nova.objects.instance [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:11:38 np0005532762 nova_compute[230183]: 2025-11-23 21:11:38.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:39 np0005532762 nova_compute[230183]: 2025-11-23 21:11:39.049 230187 DEBUG nova.objects.instance [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:11:39 np0005532762 nova_compute[230183]: 2025-11-23 21:11:39.058 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:11:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:39.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:39 np0005532762 nova_compute[230183]: 2025-11-23 21:11:39.410 230187 DEBUG nova.policy [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:11:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:41 np0005532762 nova_compute[230183]: 2025-11-23 21:11:41.257 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully created port: 9852de9e-899c-4a7c-8268-07fee5003eac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:11:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.269 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.488 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Successfully updated port: 9852de9e-899c-4a7c-8268-07fee5003eac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.500 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.500 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.501 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.625 230187 DEBUG nova.compute.manager [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.626 230187 DEBUG nova.compute.manager [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:11:42 np0005532762 nova_compute[230183]: 2025-11-23 21:11:42.626 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:11:42 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:42.739 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:43 np0005532762 nova_compute[230183]: 2025-11-23 21:11:43.995 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.413 230187 DEBUG nova.network.neutron [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.433 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.434 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.434 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.437 230187 DEBUG nova.virt.libvirt.vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.437 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG os_vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.438 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.439 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.439 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9852de9e-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.441 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9852de9e-89, col_values=(('external_ids', {'iface-id': '9852de9e-899c-4a7c-8268-07fee5003eac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:9a:cf', 'vm-uuid': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.443 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.4442] manager: (tap9852de9e-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.449 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.450 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.451 230187 INFO os_vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.451 230187 DEBUG nova.virt.libvirt.vif [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.452 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.452 230187 DEBUG nova.network.os_vif_util [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:11:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.455 230187 DEBUG nova.virt.libvirt.guest [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] attach device xml: <interface type="ethernet">
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <target dev="tap9852de9e-89"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:11:44 np0005532762 nova_compute[230183]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 23 16:11:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:44.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:44 np0005532762 kernel: tap9852de9e-89: entered promiscuous mode
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.4664] manager: (tap9852de9e-89): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.467 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:44Z|00081|binding|INFO|Claiming lport 9852de9e-899c-4a7c-8268-07fee5003eac for this chassis.
Nov 23 16:11:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:44Z|00082|binding|INFO|9852de9e-899c-4a7c-8268-07fee5003eac: Claiming fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.475 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:9a:cf 10.100.0.23'], port_security=['fa:16:3e:1a:9a:cf 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22c132b-3565-4344-9558-f1d93c19cb57, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=9852de9e-899c-4a7c-8268-07fee5003eac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.477 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 9852de9e-899c-4a7c-8268-07fee5003eac in datapath a53cafa8-a74e-467c-9117-a31bd6c650ae bound to our chassis#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.478 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a53cafa8-a74e-467c-9117-a31bd6c650ae#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.498 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f88ddc-a394-4add-91ed-400218c68b6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.499 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa53cafa8-a1 in ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.501 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa53cafa8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.502 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8c407176-758c-45a1-9f71-57e81287c5fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.502 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0a8e47-7c39-463f-988d-26e4ad6dcff5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 systemd-udevd[237877]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.515 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[383c9d50-c97e-475f-a1e5-c35c09d77d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:44Z|00083|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac ovn-installed in OVS
Nov 23 16:11:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:44Z|00084|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac up in Southbound
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.521 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.5261] device (tap9852de9e-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.5275] device (tap9852de9e-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.541 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cad880d7-0982-4bd9-b126-a7a5e274b07c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.560 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:f3:c9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.562 230187 DEBUG nova.virt.libvirt.driver [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:1a:9a:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.574 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc34fb8-2db9-4b3c-8ba2-6a29ce89e412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.580 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7faea997-9739-408d-9add-c3e9d9b37b88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.5823] manager: (tapa53cafa8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.591 230187 DEBUG nova.virt.libvirt.guest [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:11:44 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 16:11:44 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:11:44 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:11:44 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:11:44 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.610 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcfa517-d2b0-4e41-b3ea-7ad9f6c82846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.614 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[d937d89c-b04b-4267-afad-01911657498c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.624 230187 DEBUG oslo_concurrency.lockutils [None req-fe813a76-3a1c-426f-b99d-a81fac7fed20 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.6342] device (tapa53cafa8-a0): carrier: link connected
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.641 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb8c8e0-a573-4de4-9cd2-e501ae4646c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.657 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[eba859da-d21c-42aa-bff4-5d75306793e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa53cafa8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:b5:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425320, 'reachable_time': 27361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237919, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.670 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7825ae70-91fe-499d-bf5a-f347cf28aad7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:b52b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425320, 'tstamp': 425320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237921, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.687 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0dbd36-150f-4fba-a273-93ca6cebe021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa53cafa8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:b5:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425320, 'reachable_time': 27361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237922, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.718 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ff984d83-b34a-4d7a-83a5-4855eab7821d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.773 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdd7cb4-ebe0-4b66-84e5-3f5cfff3d0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.774 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa53cafa8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.774 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.775 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa53cafa8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.776 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 NetworkManager[49021]: <info>  [1763932304.7773] manager: (tapa53cafa8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 23 16:11:44 np0005532762 kernel: tapa53cafa8-a0: entered promiscuous mode
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.779 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.780 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa53cafa8-a0, col_values=(('external_ids', {'iface-id': 'cab0b4e0-79b2-41b3-92b4-7053f2aab9f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.781 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:44 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:44Z|00085|binding|INFO|Releasing lport cab0b4e0-79b2-41b3-92b4-7053f2aab9f8 from this chassis (sb_readonly=0)
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.784 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.785 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5366df-1dc4-4aa5-a718-d2241a83d58a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.786 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-a53cafa8-a74e-467c-9117-a31bd6c650ae
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/a53cafa8-a74e-467c-9117-a31bd6c650ae.pid.haproxy
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID a53cafa8-a74e-467c-9117-a31bd6c650ae
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:11:44 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:44.787 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'env', 'PROCESS_TAG=haproxy-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a53cafa8-a74e-467c-9117-a31bd6c650ae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.789 230187 DEBUG nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG oslo_concurrency.lockutils [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.790 230187 DEBUG nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.791 230187 WARNING nova.compute.manager [req-9242f190-b0b1-4de5-a6f1-d86aac6e3676 req-2c751f8c-bbe0-4baf-abba-e99c91ebe50b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.#033[00m
Nov 23 16:11:44 np0005532762 nova_compute[230183]: 2025-11-23 21:11:44.793 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:45 np0005532762 podman[238021]: 2025-11-23 21:11:45.148410135 +0000 UTC m=+0.047844037 container create 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 23 16:11:45 np0005532762 systemd[1]: Started libpod-conmon-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope.
Nov 23 16:11:45 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:11:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0150f09f49afcae45b35871ed00a9581191e83bc7cd591edc409336857fd6c40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532762 podman[238021]: 2025-11-23 21:11:45.122322058 +0000 UTC m=+0.021755990 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:11:45 np0005532762 podman[238021]: 2025-11-23 21:11:45.233045453 +0000 UTC m=+0.132479385 container init 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 16:11:45 np0005532762 podman[238021]: 2025-11-23 21:11:45.238712374 +0000 UTC m=+0.138146276 container start 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:11:45 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : New worker (238068) forked
Nov 23 16:11:45 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : Loading success.
Nov 23 16:11:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.456246038 +0000 UTC m=+0.040560943 container create cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 23 16:11:45 np0005532762 systemd[1]: Started libpod-conmon-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope.
Nov 23 16:11:45 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.438742271 +0000 UTC m=+0.023057196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.536439698 +0000 UTC m=+0.120754623 container init cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.542735505 +0000 UTC m=+0.127050400 container start cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.547980365 +0000 UTC m=+0.132295280 container attach cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:45 np0005532762 systemd[1]: libpod-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope: Deactivated successfully.
Nov 23 16:11:45 np0005532762 relaxed_zhukovsky[238130]: 167 167
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.552011133 +0000 UTC m=+0.136326038 container died cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:11:45 np0005532762 conmon[238130]: conmon cd5f313a7f1114915763 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope/container/memory.events
Nov 23 16:11:45 np0005532762 systemd[1]: var-lib-containers-storage-overlay-b28301d7c9dfa616731ec392365a7b57a1bf37d9fb07d297666f65ae4220d814-merged.mount: Deactivated successfully.
Nov 23 16:11:45 np0005532762 podman[238114]: 2025-11-23 21:11:45.588723672 +0000 UTC m=+0.173038577 container remove cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 23 16:11:45 np0005532762 systemd[1]: libpod-conmon-cd5f313a7f111491576389298826be6755e9af7846e774493a248c5c6c23d943.scope: Deactivated successfully.
Nov 23 16:11:45 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:45Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 16:11:45 np0005532762 ovn_controller[132845]: 2025-11-23T21:11:45Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:9a:cf 10.100.0.23
Nov 23 16:11:45 np0005532762 podman[238155]: 2025-11-23 21:11:45.760607988 +0000 UTC m=+0.037247195 container create 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 23 16:11:45 np0005532762 nova_compute[230183]: 2025-11-23 21:11:45.777 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:11:45 np0005532762 nova_compute[230183]: 2025-11-23 21:11:45.777 230187 DEBUG nova.network.neutron [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:11:45 np0005532762 nova_compute[230183]: 2025-11-23 21:11:45.789 230187 DEBUG oslo_concurrency.lockutils [req-a9897480-a1b3-44d9-9dd5-baa198defd0b req-4d685653-4467-4056-9a73-768d72a6809e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:11:45 np0005532762 systemd[1]: Started libpod-conmon-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope.
Nov 23 16:11:45 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:11:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 16:11:45 np0005532762 podman[238155]: 2025-11-23 21:11:45.822824158 +0000 UTC m=+0.099463375 container init 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 23 16:11:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:45 np0005532762 podman[238155]: 2025-11-23 21:11:45.83188261 +0000 UTC m=+0.108521817 container start 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:11:45 np0005532762 podman[238155]: 2025-11-23 21:11:45.744623852 +0000 UTC m=+0.021263089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 23 16:11:45 np0005532762 podman[238155]: 2025-11-23 21:11:45.842535725 +0000 UTC m=+0.119174962 container attach 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 23 16:11:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]: [
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:    {
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "available": false,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "being_replaced": false,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "ceph_device_lvm": false,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "lsm_data": {},
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "lvs": [],
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "path": "/dev/sr0",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "rejected_reasons": [
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "Insufficient space (<5GB)",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "Has a FileSystem"
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        ],
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        "sys_api": {
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "actuators": null,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "device_nodes": [
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:                "sr0"
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            ],
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "devname": "sr0",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "human_readable_size": "482.00 KB",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "id_bus": "ata",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "model": "QEMU DVD-ROM",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "nr_requests": "2",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "parent": "/dev/sr0",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "partitions": {},
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "path": "/dev/sr0",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "removable": "1",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "rev": "2.5+",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "ro": "0",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "rotational": "1",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "sas_address": "",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "sas_device_handle": "",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "scheduler_mode": "mq-deadline",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "sectors": 0,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "sectorsize": "2048",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "size": 493568.0,
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "support_discard": "2048",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "type": "disk",
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:            "vendor": "QEMU"
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:        }
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]:    }
Nov 23 16:11:46 np0005532762 nifty_diffie[238171]: ]
Nov 23 16:11:46 np0005532762 systemd[1]: libpod-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope: Deactivated successfully.
Nov 23 16:11:46 np0005532762 podman[238155]: 2025-11-23 21:11:46.52224637 +0000 UTC m=+0.798885597 container died 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 23 16:11:46 np0005532762 systemd[1]: var-lib-containers-storage-overlay-38862a5598e4fd0a274f5b5bec77424f08a2cea1ff5d04fa6f150097b5ed51d4-merged.mount: Deactivated successfully.
Nov 23 16:11:46 np0005532762 podman[238155]: 2025-11-23 21:11:46.565576926 +0000 UTC m=+0.842216133 container remove 49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 23 16:11:46 np0005532762 systemd[1]: libpod-conmon-49bcd82e77ba084fa23f2bb0d1b8100a6d664ccdfb6645b1441b2b64a16266d8.scope: Deactivated successfully.
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.875 230187 DEBUG nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.876 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG oslo_concurrency.lockutils [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 DEBUG nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:11:46 np0005532762 nova_compute[230183]: 2025-11-23 21:11:46.877 230187 WARNING nova.compute.manager [req-ac1671b5-1ec7-4774-9b8e-0d920a589520 req-7dc3a174-aa0c-4bc1-b841-c538e80cb0b9 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.#033[00m
Nov 23 16:11:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:47 np0005532762 nova_compute[230183]: 2025-11-23 21:11:47.273 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:47 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:47 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:48.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:48 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:11:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:49 np0005532762 nova_compute[230183]: 2025-11-23 21:11:49.444 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:50.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.067 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:11:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.068 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:11:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:11:51.068 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:11:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:11:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:11:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:52 np0005532762 nova_compute[230183]: 2025-11-23 21:11:52.274 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:52 np0005532762 podman[239397]: 2025-11-23 21:11:52.661035597 +0000 UTC m=+0.077045727 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:11:52 np0005532762 podman[239396]: 2025-11-23 21:11:52.661100199 +0000 UTC m=+0.077001936 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Nov 23 16:11:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:53 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:11:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:54 np0005532762 nova_compute[230183]: 2025-11-23 21:11:54.446 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:55 np0005532762 nova_compute[230183]: 2025-11-23 21:11:55.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:56.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:11:57 np0005532762 nova_compute[230183]: 2025-11-23 21:11:57.276 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:11:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:11:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:11:57 np0005532762 podman[239469]: 2025-11-23 21:11:57.649910434 +0000 UTC m=+0.067697767 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 16:11:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:11:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:11:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:11:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:11:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:11:59 np0005532762 nova_compute[230183]: 2025-11-23 21:11:59.436 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:11:59 np0005532762 nova_compute[230183]: 2025-11-23 21:11:59.449 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:01 np0005532762 nova_compute[230183]: 2025-11-23 21:12:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:01 np0005532762 nova_compute[230183]: 2025-11-23 21:12:01.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:12:01 np0005532762 nova_compute[230183]: 2025-11-23 21:12:01.441 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:12:01 np0005532762 nova_compute[230183]: 2025-11-23 21:12:01.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:01 np0005532762 nova_compute[230183]: 2025-11-23 21:12:01.442 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:12:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.279 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.471 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.472 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:12:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:02.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:12:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1966353478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.924 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.978 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:12:02 np0005532762 nova_compute[230183]: 2025-11-23 21:12:02.978 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.128 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.129 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4728MB free_disk=59.92177200317383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.129 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.130 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.270 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.271 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.271 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.375 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:12:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.433 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.434 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.447 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.470 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.510 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:12:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:12:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3949694408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.919 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.925 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.938 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.965 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:12:03 np0005532762 nova_compute[230183]: 2025-11-23 21:12:03.966 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:12:04 np0005532762 nova_compute[230183]: 2025-11-23 21:12:04.475 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:04 np0005532762 nova_compute[230183]: 2025-11-23 21:12:04.939 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:04 np0005532762 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:04 np0005532762 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:12:04 np0005532762 nova_compute[230183]: 2025-11-23 21:12:04.940 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:12:05 np0005532762 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:12:05 np0005532762 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:12:05 np0005532762 nova_compute[230183]: 2025-11-23 21:12:05.224 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:12:05 np0005532762 nova_compute[230183]: 2025-11-23 21:12:05.225 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:12:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:06.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.282 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.380 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:12:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.399 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.399 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.400 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.401 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.401 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.402 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:12:07 np0005532762 nova_compute[230183]: 2025-11-23 21:12:07.885 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:08 np0005532762 nova_compute[230183]: 2025-11-23 21:12:08.571 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:12:08 np0005532762 nova_compute[230183]: 2025-11-23 21:12:08.588 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Triggering sync for uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 23 16:12:08 np0005532762 nova_compute[230183]: 2025-11-23 21:12:08.589 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:12:08 np0005532762 nova_compute[230183]: 2025-11-23 21:12:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:12:08 np0005532762 nova_compute[230183]: 2025-11-23 21:12:08.624 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:12:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:09 np0005532762 nova_compute[230183]: 2025-11-23 21:12:09.478 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:12 np0005532762 nova_compute[230183]: 2025-11-23 21:12:12.286 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:14 np0005532762 nova_compute[230183]: 2025-11-23 21:12:14.482 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:17 np0005532762 nova_compute[230183]: 2025-11-23 21:12:17.287 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:18.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:19 np0005532762 nova_compute[230183]: 2025-11-23 21:12:19.527 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:21 np0005532762 nova_compute[230183]: 2025-11-23 21:12:21.712 230187 DEBUG nova.compute.manager [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:12:21 np0005532762 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG nova.compute.manager [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-9852de9e-899c-4a7c-8268-07fee5003eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:12:21 np0005532762 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:12:21 np0005532762 nova_compute[230183]: 2025-11-23 21:12:21.713 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:12:21 np0005532762 nova_compute[230183]: 2025-11-23 21:12:21.714 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:12:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:22 np0005532762 nova_compute[230183]: 2025-11-23 21:12:22.291 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:23 np0005532762 nova_compute[230183]: 2025-11-23 21:12:23.224 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port 9852de9e-899c-4a7c-8268-07fee5003eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:12:23 np0005532762 nova_compute[230183]: 2025-11-23 21:12:23.225 230187 DEBUG nova.network.neutron [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:12:23 np0005532762 nova_compute[230183]: 2025-11-23 21:12:23.242 230187 DEBUG oslo_concurrency.lockutils [req-4c1f441d-d75a-4227-b72f-8a3849ce6944 req-276eebf3-1b82-4bcd-8e53-9dd2b02a02ef 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:12:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:23 np0005532762 podman[239573]: 2025-11-23 21:12:23.640386346 +0000 UTC m=+0.043578454 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:12:23 np0005532762 podman[239572]: 2025-11-23 21:12:23.698682372 +0000 UTC m=+0.104107350 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:12:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:24 np0005532762 nova_compute[230183]: 2025-11-23 21:12:24.529 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:26.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:27 np0005532762 nova_compute[230183]: 2025-11-23 21:12:27.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:28.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:28 np0005532762 podman[239644]: 2025-11-23 21:12:28.636560328 +0000 UTC m=+0.056589251 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 16:12:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:29 np0005532762 nova_compute[230183]: 2025-11-23 21:12:29.532 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:30.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:32 np0005532762 nova_compute[230183]: 2025-11-23 21:12:32.298 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [WARNING] 326/211232 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 23 16:12:32 np0005532762 ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei[86089]: [ALERT] 326/211232 (4) : backend 'backend' has no server available!
Nov 23 16:12:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:34 np0005532762 nova_compute[230183]: 2025-11-23 21:12:34.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:37 np0005532762 nova_compute[230183]: 2025-11-23 21:12:37.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:39 np0005532762 nova_compute[230183]: 2025-11-23 21:12:39.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:40.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:41.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:42 np0005532762 nova_compute[230183]: 2025-11-23 21:12:42.303 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:42.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:43.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:44.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:44 np0005532762 nova_compute[230183]: 2025-11-23 21:12:44.580 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:45 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 23 16:12:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:47 np0005532762 nova_compute[230183]: 2025-11-23 21:12:47.305 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:49 np0005532762 nova_compute[230183]: 2025-11-23 21:12:49.583 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.069 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:12:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.069 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:12:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:12:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:12:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:51 np0005532762 ovn_controller[132845]: 2025-11-23T21:12:51Z|00086|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 23 16:12:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:52 np0005532762 nova_compute[230183]: 2025-11-23 21:12:52.307 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:12:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:12:54 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:12:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:54 np0005532762 nova_compute[230183]: 2025-11-23 21:12:54.585 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:54 np0005532762 podman[239788]: 2025-11-23 21:12:54.676586044 +0000 UTC m=+0.062914420 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:12:54 np0005532762 podman[239787]: 2025-11-23 21:12:54.699562677 +0000 UTC m=+0.095273513 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:12:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:55.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:12:57 np0005532762 nova_compute[230183]: 2025-11-23 21:12:57.309 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:12:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:57.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:12:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:12:58.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:12:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:12:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:12:59.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:12:59 np0005532762 nova_compute[230183]: 2025-11-23 21:12:59.616 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:12:59 np0005532762 podman[239834]: 2025-11-23 21:12:59.641182082 +0000 UTC m=+0.061693347 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 16:13:00 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:13:00 np0005532762 nova_compute[230183]: 2025-11-23 21:13:00.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:02 np0005532762 nova_compute[230183]: 2025-11-23 21:13:02.312 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.445 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:03.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4016648780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.878 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.959 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:13:03 np0005532762 nova_compute[230183]: 2025-11-23 21:13:03.960 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.104 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4725MB free_disk=59.896949768066406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.105 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.159 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.187 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3454510770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.616 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.622 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.634 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.636 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:13:04 np0005532762 nova_compute[230183]: 2025-11-23 21:13:04.636 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:05.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:05 np0005532762 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:05 np0005532762 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:13:05 np0005532762 nova_compute[230183]: 2025-11-23 21:13:05.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:13:06 np0005532762 nova_compute[230183]: 2025-11-23 21:13:06.022 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:06 np0005532762 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:06 np0005532762 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:13:06 np0005532762 nova_compute[230183]: 2025-11-23 21:13:06.023 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:06 np0005532762 nova_compute[230183]: 2025-11-23 21:13:06.488 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:06.489 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:13:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:06.490 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:13:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:06.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:07 np0005532762 nova_compute[230183]: 2025-11-23 21:13:07.314 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.826 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.839 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.839 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.840 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.841 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:13:08 np0005532762 nova_compute[230183]: 2025-11-23 21:13:08.841 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:13:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.620 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.754 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.754 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.765 230187 DEBUG nova.objects.instance [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.781 230187 DEBUG nova.virt.libvirt.vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.781 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.782 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.784 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.786 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.787 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Attempting to detach device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.788 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <target dev="tap9852de9e-89"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.795 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.797 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <name>instance-00000006</name>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='tapbdbb1df8-a0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:1a:9a:cf'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='tap9852de9e-89'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='net1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.799 230187 INFO nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the persistent domain config.#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.799 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] (1/8): Attempting to detach device tap9852de9e-89 with device alias net1 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.800 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] detach device xml: <interface type="ethernet">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <mac address="fa:16:3e:1a:9a:cf"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <model type="virtio"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <mtu size="1442"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <target dev="tap9852de9e-89"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </interface>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 23 16:13:09 np0005532762 kernel: tap9852de9e-89 (unregistering): left promiscuous mode
Nov 23 16:13:09 np0005532762 NetworkManager[49021]: <info>  [1763932389.8437] device (tap9852de9e-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:13:09 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:09Z|00087|binding|INFO|Releasing lport 9852de9e-899c-4a7c-8268-07fee5003eac from this chassis (sb_readonly=0)
Nov 23 16:13:09 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:09Z|00088|binding|INFO|Setting lport 9852de9e-899c-4a7c-8268-07fee5003eac down in Southbound
Nov 23 16:13:09 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:09Z|00089|binding|INFO|Removing iface tap9852de9e-89 ovn-installed in OVS
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.853 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.859 230187 DEBUG nova.virt.libvirt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Received event <DeviceRemovedEvent: 1763932389.8586385, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 23 16:13:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.859 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:9a:cf 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22c132b-3565-4344-9558-f1d93c19cb57, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=9852de9e-899c-4a7c-8268-07fee5003eac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:13:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.860 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 9852de9e-899c-4a7c-8268-07fee5003eac in datapath a53cafa8-a74e-467c-9117-a31bd6c650ae unbound from our chassis#033[00m
Nov 23 16:13:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.861 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a53cafa8-a74e-467c-9117-a31bd6c650ae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.861 230187 DEBUG nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Start waiting for the detach event from libvirt for device tap9852de9e-89 with device alias net1 for instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.862 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:13:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.862 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8815a5-afc1-4833-b18e-45c901274652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:09 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:09.863 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae namespace which is not needed anymore#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.865 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <name>instance-00000006</name>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:11:44</nova:creationTime>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:port uuid="9852de9e-899c-4a7c-8268-07fee5003eac">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target dev='tapbdbb1df8-a0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.865 230187 INFO nova.virt.libvirt.driver [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully detached device tap9852de9e-89 from instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 from the live domain config.#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.866 230187 DEBUG nova.virt.libvirt.vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.866 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.867 230187 DEBUG nova.network.os_vif_util [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.868 230187 DEBUG os_vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.869 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.870 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9852de9e-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.871 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.877 230187 INFO os_vif [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')#033[00m
Nov 23 16:13:09 np0005532762 nova_compute[230183]: 2025-11-23 21:13:09.878 230187 DEBUG nova.virt.libvirt.guest [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:09 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:09 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:09 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : haproxy version is 2.8.14-c23fe91
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [NOTICE]   (238065) : path to executable is /usr/sbin/haproxy
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : Exiting Master process...
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : Exiting Master process...
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [ALERT]    (238065) : Current worker (238068) exited with code 143 (Terminated)
Nov 23 16:13:09 np0005532762 neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae[238060]: [WARNING]  (238065) : All workers exited. Exiting... (0)
Nov 23 16:13:09 np0005532762 systemd[1]: libpod-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope: Deactivated successfully.
Nov 23 16:13:09 np0005532762 podman[239976]: 2025-11-23 21:13:09.993936951 +0000 UTC m=+0.040971284 container died 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 16:13:10 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed-userdata-shm.mount: Deactivated successfully.
Nov 23 16:13:10 np0005532762 systemd[1]: var-lib-containers-storage-overlay-0150f09f49afcae45b35871ed00a9581191e83bc7cd591edc409336857fd6c40-merged.mount: Deactivated successfully.
Nov 23 16:13:10 np0005532762 podman[239976]: 2025-11-23 21:13:10.030490657 +0000 UTC m=+0.077524990 container cleanup 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:13:10 np0005532762 systemd[1]: libpod-conmon-9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed.scope: Deactivated successfully.
Nov 23 16:13:10 np0005532762 podman[240004]: 2025-11-23 21:13:10.08381275 +0000 UTC m=+0.035828587 container remove 9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.088 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c5193350-1924-4069-ad7e-0087cd184cde]: (4, ('Sun Nov 23 09:13:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae (9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed)\n9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed\nSun Nov 23 09:13:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae (9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed)\n9dd79f3e519c65bbc0e5dce6d36ef4f64107f7bc93e1476f17856b8c412695ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.089 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2c24f3-6e85-4b4f-ae79-7adf7a6dcab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.090 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa53cafa8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.092 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:10 np0005532762 kernel: tapa53cafa8-a0: left promiscuous mode
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.104 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.105 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.107 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[42060850-8a1f-4311-89a1-6a6c172921fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.125 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b3abcc47-2dcf-46f6-bfe1-3cd150ed3c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.126 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[78799e3f-9a5f-4bcd-9064-2755d62367a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.138 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a74401e5-b9eb-432a-b748-4a957b63438f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425314, 'reachable_time': 31134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240022, 'error': None, 'target': 'ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.140 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a53cafa8-a74e-467c-9117-a31bd6c650ae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:13:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:10.140 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[21921e47-ff0e-4ebc-86fe-3521eb38b1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:10 np0005532762 systemd[1]: run-netns-ovnmeta\x2da53cafa8\x2da74e\x2d467c\x2d9117\x2da31bd6c650ae.mount: Deactivated successfully.
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.418 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.418 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.420 230187 DEBUG nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:13:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.610 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.610 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.611 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 WARNING nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-unplugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.612 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG oslo_concurrency.lockutils [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.613 230187 WARNING nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-9852de9e-899c-4a7c-8268-07fee5003eac for instance with vm_state active and task_state None.#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 DEBUG nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-deleted-9852de9e-899c-4a7c-8268-07fee5003eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 INFO nova.compute.manager [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Neutron deleted interface 9852de9e-899c-4a7c-8268-07fee5003eac; detaching it from the instance and deleting it from the info cache#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.614 230187 DEBUG nova.network.neutron [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.639 230187 DEBUG nova.objects.instance [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'system_metadata' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.659 230187 DEBUG nova.objects.instance [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lazy-loading 'flavor' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.676 230187 DEBUG nova.virt.libvirt.vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.676 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.678 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.683 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.688 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <name>instance-00000006</name>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='tapbdbb1df8-a0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.689 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.693 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1a:9a:cf"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9852de9e-89"/></interface>not found in domain: <domain type='kvm' id='4'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <name>instance-00000006</name>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <uuid>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</uuid>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:13:09</nova:creationTime>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <memory unit='KiB'>131072</memory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <vcpu placement='static'>1</vcpu>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <resource>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <partition>/machine</partition>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </resource>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <sysinfo type='smbios'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='manufacturer'>RDO</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='product'>OpenStack Compute</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='serial'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='uuid'>4bac23b8-7bcd-4f5e-89a8-b035a16ffe36</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <entry name='family'>Virtual Machine</entry>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <boot dev='hd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <smbios mode='sysinfo'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <vmcoreinfo state='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <cpu mode='custom' match='exact' check='full'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <vendor>AMD</vendor>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='x2apic'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc-deadline'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='hypervisor'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='tsc_adjust'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='spec-ctrl'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='stibp'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='cmp_legacy'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='overflow-recov'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='succor'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='ibrs'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='amd-ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='virt-ssbd'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='lbrv'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='tsc-scale'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='vmcb-clean'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='flushbyasid'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pause-filter'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='pfthreshold'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='xsaves'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='svm'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='require' name='topoext'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='npt'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <feature policy='disable' name='nrip-save'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <clock offset='utc'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='pit' tickpolicy='delay'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <timer name='hpet' present='no'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_poweroff>destroy</on_poweroff>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_reboot>restart</on_reboot>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <on_crash>destroy</on_crash>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <disk type='network' device='disk'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk' index='2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='vda' bus='virtio'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='virtio-disk0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <disk type='network' device='cdrom'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='qemu' type='raw' cache='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <auth username='openstack'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <secret type='ceph' uuid='03808be8-ae4a-5548-82e6-4a294f1bc627'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source protocol='rbd' name='vms/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_disk.config' index='1'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.100' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.102' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <host name='192.168.122.101' port='6789'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='sda' bus='sata'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <readonly/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='sata0-0-0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='0' model='pcie-root'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pcie.0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='1' port='0x10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='2' port='0x11'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='3' port='0x12'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='4' port='0x13'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='5' port='0x14'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='6' port='0x15'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='7' port='0x16'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='8' port='0x17'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.8'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='9' port='0x18'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.9'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='10' port='0x19'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='11' port='0x1a'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.11'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='12' port='0x1b'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.12'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='13' port='0x1c'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.13'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='14' port='0x1d'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.14'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='15' port='0x1e'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.15'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='16' port='0x1f'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.16'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='17' port='0x20'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.17'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='18' port='0x21'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.18'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='19' port='0x22'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.19'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='20' port='0x23'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.20'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='21' port='0x24'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.21'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='22' port='0x25'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.22'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='23' port='0x26'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.23'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='24' port='0x27'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.24'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-root-port'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target chassis='25' port='0x28'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.25'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model name='pcie-pci-bridge'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='pci.26'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='usb'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <controller type='sata' index='0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='ide'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </controller>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <interface type='ethernet'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <mac address='fa:16:3e:f3:c9:f4'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target dev='tapbdbb1df8-a0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model type='virtio'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <driver name='vhost' rx_queue_size='512'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <mtu size='1442'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='net0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <serial type='pty'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target type='isa-serial' port='0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:        <model name='isa-serial'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      </target>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <console type='pty' tty='/dev/pts/0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <source path='/dev/pts/0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <log file='/var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36/console.log' append='off'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <target type='serial' port='0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='serial0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </console>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='tablet' bus='usb'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='usb' bus='0' port='1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='mouse' bus='ps2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input1'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <input type='keyboard' bus='ps2'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='input2'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </input>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <listen type='address' address='::0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </graphics>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <audio id='1' type='none'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <model type='virtio' heads='1' primary='yes'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='video0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <watchdog model='itco' action='reset'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='watchdog0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </watchdog>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <memballoon model='virtio'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <stats period='10'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='balloon0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <rng model='virtio'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <backend model='random'>/dev/urandom</backend>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <alias name='rng0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <label>system_u:system_r:svirt_t:s0:c536,c844</label>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c536,c844</imagelabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <label>+107:+107</label>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <imagelabel>+107:+107</imagelabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </seclabel>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.694 230187 WARNING nova.virt.libvirt.driver [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Detaching interface fa:16:3e:1a:9a:cf failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9852de9e-89' not found.#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.695 230187 DEBUG nova.virt.libvirt.vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.696 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converting VIF {"id": "9852de9e-899c-4a7c-8268-07fee5003eac", "address": "fa:16:3e:1a:9a:cf", "network": {"id": "a53cafa8-a74e-467c-9117-a31bd6c650ae", "bridge": "br-int", "label": "tempest-network-smoke--511994107", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9852de9e-89", "ovs_interfaceid": "9852de9e-899c-4a7c-8268-07fee5003eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.697 230187 DEBUG nova.network.os_vif_util [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.697 230187 DEBUG os_vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.700 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.700 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9852de9e-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.701 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.703 230187 INFO os_vif [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:9a:cf,bridge_name='br-int',has_traffic_filtering=True,id=9852de9e-899c-4a7c-8268-07fee5003eac,network=Network(a53cafa8-a74e-467c-9117-a31bd6c650ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9852de9e-89')#033[00m
Nov 23 16:13:10 np0005532762 nova_compute[230183]: 2025-11-23 21:13:10.704 230187 DEBUG nova.virt.libvirt.guest [req-fdabdfae-f2f9-4060-8965-e2a3cc44a60d req-c3daf592-1315-4050-a836-eb332ade3c0b 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:name>tempest-TestNetworkBasicOps-server-1210792474</nova:name>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:creationTime>2025-11-23 21:13:10</nova:creationTime>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:flavor name="m1.nano">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:memory>128</nova:memory>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:disk>1</nova:disk>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:swap>0</nova:swap>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:flavor>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:owner>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  <nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    <nova:port uuid="bdbb1df8-a028-4685-9661-24563619eb80">
Nov 23 16:13:10 np0005532762 nova_compute[230183]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:    </nova:port>
Nov 23 16:13:10 np0005532762 nova_compute[230183]:  </nova:ports>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: </nova:instance>
Nov 23 16:13:10 np0005532762 nova_compute[230183]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 23 16:13:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:11 np0005532762 nova_compute[230183]: 2025-11-23 21:13:11.653 230187 INFO nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Port 9852de9e-899c-4a7c-8268-07fee5003eac from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 23 16:13:11 np0005532762 nova_compute[230183]: 2025-11-23 21:13:11.654 230187 DEBUG nova.network.neutron [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:11 np0005532762 nova_compute[230183]: 2025-11-23 21:13:11.669 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:11 np0005532762 nova_compute[230183]: 2025-11-23 21:13:11.687 230187 DEBUG oslo_concurrency.lockutils [None req-8ecf62af-4e46-4236-860a-e89559a2e7c1 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "interface-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-9852de9e-899c-4a7c-8268-07fee5003eac" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:11 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:11Z|00090|binding|INFO|Releasing lport 882afaa1-9000-493d-808e-b1d906b6e642 from this chassis (sb_readonly=0)
Nov 23 16:13:11 np0005532762 nova_compute[230183]: 2025-11-23 21:13:11.854 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.277 230187 DEBUG nova.compute.manager [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-changed-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG nova.compute.manager [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing instance network info cache due to event network-changed-bdbb1df8-a028-4685-9661-24563619eb80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.278 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Refreshing network info cache for port bdbb1df8-a028-4685-9661-24563619eb80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.317 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.367 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.368 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.369 230187 INFO nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Terminating instance#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.370 230187 DEBUG nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:13:12 np0005532762 kernel: tapbdbb1df8-a0 (unregistering): left promiscuous mode
Nov 23 16:13:12 np0005532762 NetworkManager[49021]: <info>  [1763932392.4220] device (tapbdbb1df8-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.428 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:12Z|00091|binding|INFO|Releasing lport bdbb1df8-a028-4685-9661-24563619eb80 from this chassis (sb_readonly=0)
Nov 23 16:13:12 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:12Z|00092|binding|INFO|Setting lport bdbb1df8-a028-4685-9661-24563619eb80 down in Southbound
Nov 23 16:13:12 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:12Z|00093|binding|INFO|Removing iface tapbdbb1df8-a0 ovn-installed in OVS
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.429 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.439 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c9:f4 10.100.0.12'], port_security=['fa:16:3e:f3:c9:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4bac23b8-7bcd-4f5e-89a8-b035a16ffe36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa502c12-d22c-490c-942b-57c2b1624866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30b87ecc-e7bf-46f1-a605-8bcfe0ecba45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8207d226-2b2e-4ad5-9d7b-3777cdc61652, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=bdbb1df8-a028-4685-9661-24563619eb80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.440 142158 INFO neutron.agent.ovn.metadata.agent [-] Port bdbb1df8-a028-4685-9661-24563619eb80 in datapath aa502c12-d22c-490c-942b-57c2b1624866 unbound from our chassis#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.441 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa502c12-d22c-490c-942b-57c2b1624866, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.442 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[28a61e05-1b76-4e31-bc53-b5799b96bbc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.442 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 namespace which is not needed anymore#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.447 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 23 16:13:12 np0005532762 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 18.410s CPU time.
Nov 23 16:13:12 np0005532762 systemd-machined[193469]: Machine qemu-4-instance-00000006 terminated.
Nov 23 16:13:12 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : haproxy version is 2.8.14-c23fe91
Nov 23 16:13:12 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [NOTICE]   (237704) : path to executable is /usr/sbin/haproxy
Nov 23 16:13:12 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [WARNING]  (237704) : Exiting Master process...
Nov 23 16:13:12 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [ALERT]    (237704) : Current worker (237706) exited with code 143 (Terminated)
Nov 23 16:13:12 np0005532762 neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866[237700]: [WARNING]  (237704) : All workers exited. Exiting... (0)
Nov 23 16:13:12 np0005532762 systemd[1]: libpod-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope: Deactivated successfully.
Nov 23 16:13:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:12.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:12 np0005532762 podman[240046]: 2025-11-23 21:13:12.556299917 +0000 UTC m=+0.039925227 container died ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:13:12 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600-userdata-shm.mount: Deactivated successfully.
Nov 23 16:13:12 np0005532762 systemd[1]: var-lib-containers-storage-overlay-1d0c50b76b192f5ce5a4fd663eee6064b85b526a900eeee678e7ce0a629a71ae-merged.mount: Deactivated successfully.
Nov 23 16:13:12 np0005532762 NetworkManager[49021]: <info>  [1763932392.5899] manager: (tapbdbb1df8-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.591 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 podman[240046]: 2025-11-23 21:13:12.593626632 +0000 UTC m=+0.077251942 container cleanup ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.596 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 systemd[1]: libpod-conmon-ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600.scope: Deactivated successfully.
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.605 230187 INFO nova.virt.libvirt.driver [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Instance destroyed successfully.#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.605 230187 DEBUG nova.objects.instance [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.627 230187 DEBUG nova.virt.libvirt.vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:11:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1210792474',display_name='tempest-TestNetworkBasicOps-server-1210792474',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1210792474',id=6,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC9I5o3FOJoMlLS5RVHvg4JB6VMA0TLpRAHrRWOuj73hgQ5knZWkP8wznWff+IF5v3eA9GQgz9kKnWlcz54pfIskwjEMQ8tpar2NP2dJjbFuASygJ+AuXJaTUib24SH0fw==',key_name='tempest-TestNetworkBasicOps-192906804',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:11:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-jk4nm00m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:11:16Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4bac23b8-7bcd-4f5e-89a8-b035a16ffe36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.628 230187 DEBUG nova.network.os_vif_util [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.629 230187 DEBUG nova.network.os_vif_util [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.629 230187 DEBUG os_vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.630 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.631 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdbb1df8-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.632 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.637 230187 INFO os_vif [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:c9:f4,bridge_name='br-int',has_traffic_filtering=True,id=bdbb1df8-a028-4685-9661-24563619eb80,network=Network(aa502c12-d22c-490c-942b-57c2b1624866),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdbb1df8-a0')#033[00m
Nov 23 16:13:12 np0005532762 podman[240087]: 2025-11-23 21:13:12.656800397 +0000 UTC m=+0.037880401 container remove ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.663 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[379bbf24-ac77-4378-86b6-7769de929ff6]: (4, ('Sun Nov 23 09:13:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 (ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600)\nae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600\nSun Nov 23 09:13:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 (ae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600)\nae0b27a770ef2cf43f82c68adc6365354c06936a9cd7f93f4e5ec82be240a600\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.665 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5d56225d-22df-4120-8b37-4b556c7dc6cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.665 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa502c12-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.667 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 kernel: tapaa502c12-d0: left promiscuous mode
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.685 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.688 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[230dd5e7-80ba-4212-88aa-54ab457a62e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.700 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[15bad5e6-45fb-4918-8ff4-c39103d409fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.701 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ccf312-72ec-4b61-9208-b5159d253fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.723 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[723dfb59-2fbf-4c8c-a12f-8b12eccc23ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 422441, 'reachable_time': 18666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240119, 'error': None, 'target': 'ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.726 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa502c12-d22c-490c-942b-57c2b1624866 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:13:12 np0005532762 systemd[1]: run-netns-ovnmeta\x2daa502c12\x2dd22c\x2d490c\x2d942b\x2d57c2b1624866.mount: Deactivated successfully.
Nov 23 16:13:12 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:12.726 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[1a29f803-9703-438b-9300-0a7f6d1c7ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.846 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.846 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG oslo_concurrency.lockutils [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:12 np0005532762 nova_compute[230183]: 2025-11-23 21:13:12.847 230187 DEBUG nova.compute.manager [req-2f8f9936-3ef5-4c95-94a8-748eca89f5d2 req-cd71013f-cf0b-4457-ba28-f3f77d839aa4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-unplugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.081 230187 INFO nova.virt.libvirt.driver [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deleting instance files /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_del#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.082 230187 INFO nova.virt.libvirt.driver [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deletion of /var/lib/nova/instances/4bac23b8-7bcd-4f5e-89a8-b035a16ffe36_del complete#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.154 230187 INFO nova.compute.manager [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG oslo.service.loopingcall [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:13:13 np0005532762 nova_compute[230183]: 2025-11-23 21:13:13.155 230187 DEBUG nova.network.neutron [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:13:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:14 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:14.492 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.606 230187 DEBUG nova.network.neutron [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.630 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updated VIF entry in instance network info cache for port bdbb1df8-a028-4685-9661-24563619eb80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.630 230187 DEBUG nova.network.neutron [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [{"id": "bdbb1df8-a028-4685-9661-24563619eb80", "address": "fa:16:3e:f3:c9:f4", "network": {"id": "aa502c12-d22c-490c-942b-57c2b1624866", "bridge": "br-int", "label": "tempest-network-smoke--330338944", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdbb1df8-a0", "ovs_interfaceid": "bdbb1df8-a028-4685-9661-24563619eb80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.656 230187 INFO nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Took 1.50 seconds to deallocate network for instance.#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.683 230187 DEBUG oslo_concurrency.lockutils [req-78cc67d0-1bfd-47af-9959-6521b64f48e6 req-b2d7ff5b-2306-4817-8d26-d05593c96ee6 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.724 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.725 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.746 230187 DEBUG nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-deleted-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.747 230187 INFO nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Neutron deleted interface bdbb1df8-a028-4685-9661-24563619eb80; detaching it from the instance and deleting it from the info cache#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.747 230187 DEBUG nova.network.neutron [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.769 230187 DEBUG nova.compute.manager [req-aa11cde4-66ee-425e-8d26-5056ed623d73 req-fa81abc1-0598-4bc3-898c-35bd659f2ac8 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Detach interface failed, port_id=bdbb1df8-a028-4685-9661-24563619eb80, reason: Instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.771 230187 DEBUG oslo_concurrency.processutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.948 230187 DEBUG nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG oslo_concurrency.lockutils [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 DEBUG nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] No waiting events found dispatching network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:14 np0005532762 nova_compute[230183]: 2025-11-23 21:13:14.949 230187 WARNING nova.compute.manager [req-a54300cf-3307-45c3-aba2-70eb4c9aea59 req-b828c63c-a5e2-4b69-8ec8-759ec285e08e 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Received unexpected event network-vif-plugged-bdbb1df8-a028-4685-9661-24563619eb80 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:13:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:15 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3698391352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.204 230187 DEBUG oslo_concurrency.processutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.209 230187 DEBUG nova.compute.provider_tree [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.224 230187 DEBUG nova.scheduler.client.report [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.249 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.277 230187 INFO nova.scheduler.client.report [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36#033[00m
Nov 23 16:13:15 np0005532762 nova_compute[230183]: 2025-11-23 21:13:15.369 230187 DEBUG oslo_concurrency.lockutils [None req-47675623-9918-4bd3-9aa8-83356dd89f88 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4bac23b8-7bcd-4f5e-89a8-b035a16ffe36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:15 np0005532762 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 16:13:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:17 np0005532762 nova_compute[230183]: 2025-11-23 21:13:17.318 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:17 np0005532762 nova_compute[230183]: 2025-11-23 21:13:17.632 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:18.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:18 np0005532762 nova_compute[230183]: 2025-11-23 21:13:18.759 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:18 np0005532762 nova_compute[230183]: 2025-11-23 21:13:18.829 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:22 np0005532762 nova_compute[230183]: 2025-11-23 21:13:22.320 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:22 np0005532762 nova_compute[230183]: 2025-11-23 21:13:22.634 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:23.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:25.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:25 np0005532762 podman[240155]: 2025-11-23 21:13:25.654485065 +0000 UTC m=+0.061344237 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 16:13:25 np0005532762 podman[240154]: 2025-11-23 21:13:25.691626197 +0000 UTC m=+0.093621299 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:13:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:27 np0005532762 nova_compute[230183]: 2025-11-23 21:13:27.322 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:27.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:27 np0005532762 nova_compute[230183]: 2025-11-23 21:13:27.603 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932392.6023915, 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:13:27 np0005532762 nova_compute[230183]: 2025-11-23 21:13:27.603 230187 INFO nova.compute.manager [-] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:13:27 np0005532762 nova_compute[230183]: 2025-11-23 21:13:27.619 230187 DEBUG nova.compute.manager [None req-2669e519-7d87-4110-945c-4c498466f9bf - - - - - -] [instance: 4bac23b8-7bcd-4f5e-89a8-b035a16ffe36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:27 np0005532762 nova_compute[230183]: 2025-11-23 21:13:27.675 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:28.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:29.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:30.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:30 np0005532762 podman[240225]: 2025-11-23 21:13:30.641766549 +0000 UTC m=+0.055315657 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:13:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:31.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:32 np0005532762 nova_compute[230183]: 2025-11-23 21:13:32.324 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:32 np0005532762 nova_compute[230183]: 2025-11-23 21:13:32.676 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.727 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.727 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.739 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.815 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.816 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.822 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.822 230187 INFO nova.compute.claims [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:13:33 np0005532762 nova_compute[230183]: 2025-11-23 21:13:33.903 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:34 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:34 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1372269286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.349 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.360 230187 DEBUG nova.compute.provider_tree [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.380 230187 DEBUG nova.scheduler.client.report [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.405 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.406 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.453 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.454 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.486 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.506 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:13:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.607 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.609 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.610 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating image(s)#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.642 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.672 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.703 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.707 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.725 230187 DEBUG nova.policy [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.761 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.761 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.762 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.762 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.788 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:34 np0005532762 nova_compute[230183]: 2025-11-23 21:13:34.791 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c73efbfb-509e-4eb2-af63-a65ba0f98094_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.084 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c73efbfb-509e-4eb2-af63-a65ba0f98094_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.148 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.257 230187 DEBUG nova.objects.instance [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.279 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.280 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Ensure instance console log exists: /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.280 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.281 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:35 np0005532762 nova_compute[230183]: 2025-11-23 21:13:35.281 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:35.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.089 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Successfully updated port: ba818b19-9f72-4242-b9d9-b1630b5d1f24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.101 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.170 230187 DEBUG nova.compute.manager [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.171 230187 DEBUG nova.compute.manager [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing instance network info cache due to event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.171 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.242 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:13:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.810 230187 DEBUG nova.network.neutron [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.834 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance network_info: |[{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.835 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.839 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start _get_guest_xml network_info=[{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.844 230187 WARNING nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.852 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.853 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.857 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.857 230187 DEBUG nova.virt.libvirt.host [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.858 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.858 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.859 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.860 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.861 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.861 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.862 230187 DEBUG nova.virt.hardware [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:13:36 np0005532762 nova_compute[230183]: 2025-11-23 21:13:36.865 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:13:37 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2402362312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.326 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.336 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.357 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.360 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:37.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.677 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:13:37 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/980794617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.783 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.785 230187 DEBUG nova.virt.libvirt.vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:13:34Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.785 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.786 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.788 230187 DEBUG nova.objects.instance [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.800 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <uuid>c73efbfb-509e-4eb2-af63-a65ba0f98094</uuid>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <name>instance-00000008</name>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-1142109245</nova:name>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:13:36</nova:creationTime>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <nova:port uuid="ba818b19-9f72-4242-b9d9-b1630b5d1f24">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="serial">c73efbfb-509e-4eb2-af63-a65ba0f98094</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="uuid">c73efbfb-509e-4eb2-af63-a65ba0f98094</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/c73efbfb-509e-4eb2-af63-a65ba0f98094_disk">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:0d:e6:fe"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <target dev="tapba818b19-9f"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/console.log" append="off"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:13:37 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:13:37 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:13:37 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:13:37 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Preparing to wait for external event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.803 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.804 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.804 230187 DEBUG nova.virt.libvirt.vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:13:34Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.805 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.805 230187 DEBUG nova.network.os_vif_util [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.806 230187 DEBUG os_vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.807 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.808 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.811 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.811 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba818b19-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.812 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba818b19-9f, col_values=(('external_ids', {'iface-id': 'ba818b19-9f72-4242-b9d9-b1630b5d1f24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:e6:fe', 'vm-uuid': 'c73efbfb-509e-4eb2-af63-a65ba0f98094'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.814 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 NetworkManager[49021]: <info>  [1763932417.8150] manager: (tapba818b19-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.815 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.822 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.823 230187 INFO os_vif [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f')#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.884 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.885 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.885 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:0d:e6:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.886 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Using config drive#033[00m
Nov 23 16:13:37 np0005532762 nova_compute[230183]: 2025-11-23 21:13:37.916 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.267 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updated VIF entry in instance network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.268 230187 DEBUG nova.network.neutron [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.283 230187 DEBUG oslo_concurrency.lockutils [req-f5a0c52a-c69c-4e3a-a78e-133263309b78 req-69cf13fb-61d4-4a1e-991b-15f3ef31b11d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.392 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Creating config drive at /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.397 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpnhd6l3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.522 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpnhd6l3" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.560 230187 DEBUG nova.storage.rbd_utils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:13:38 np0005532762 nova_compute[230183]: 2025-11-23 21:13:38.564 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.119 230187 DEBUG oslo_concurrency.processutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config c73efbfb-509e-4eb2-af63-a65ba0f98094_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.121 230187 INFO nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deleting local config drive /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094/disk.config because it was imported into RBD.#033[00m
Nov 23 16:13:39 np0005532762 systemd[1]: Starting libvirt secret daemon...
Nov 23 16:13:39 np0005532762 systemd[1]: Started libvirt secret daemon.
Nov 23 16:13:39 np0005532762 kernel: tapba818b19-9f: entered promiscuous mode
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.231 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:39Z|00094|binding|INFO|Claiming lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 for this chassis.
Nov 23 16:13:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:39Z|00095|binding|INFO|ba818b19-9f72-4242-b9d9-b1630b5d1f24: Claiming fa:16:3e:0d:e6:fe 10.100.0.12
Nov 23 16:13:39 np0005532762 NetworkManager[49021]: <info>  [1763932419.2333] manager: (tapba818b19-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.237 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.240 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.246 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.258 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e6:fe 10.100.0.12'], port_security=['fa:16:3e:0d:e6:fe 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c73efbfb-509e-4eb2-af63-a65ba0f98094', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc3d174-1770-40d5-b0cb-7f310bc5e484, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=ba818b19-9f72-4242-b9d9-b1630b5d1f24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.259 142158 INFO neutron.agent.ovn.metadata.agent [-] Port ba818b19-9f72-4242-b9d9-b1630b5d1f24 in datapath fd64d126-bc30-4f96-8737-9a4b1cf2fe8a bound to our chassis#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.260 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd64d126-bc30-4f96-8737-9a4b1cf2fe8a#033[00m
Nov 23 16:13:39 np0005532762 systemd-machined[193469]: New machine qemu-5-instance-00000008.
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.277 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0e051fbe-e9dc-4bd5-8573-04a10d75a747]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.278 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd64d126-b1 in ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.280 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd64d126-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.280 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[380f25ae-9888-45cd-b83d-8df1b27d6183]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.281 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[1273177f-1a93-498b-b9ae-ff1673cb6221]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.297 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0144d959-b2cb-40e6-ba43-74e0e93e2237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:39Z|00096|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 ovn-installed in OVS
Nov 23 16:13:39 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:39Z|00097|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 up in Southbound
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:39 np0005532762 systemd[1]: Started Virtual Machine qemu-5-instance-00000008.
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.324 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[531c1a0b-f1d6-479a-8f42-3b9e5d5238a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.352 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[abadc56e-bd4a-41e5-9cfc-3364bdbe713e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 NetworkManager[49021]: <info>  [1763932419.3609] manager: (tapfd64d126-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.358 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2de9d64e-d55d-4097-b6df-eeee03a6a59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 systemd-udevd[240600]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:13:39 np0005532762 systemd-udevd[240598]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:13:39 np0005532762 NetworkManager[49021]: <info>  [1763932419.3807] device (tapba818b19-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:13:39 np0005532762 NetworkManager[49021]: <info>  [1763932419.3835] device (tapba818b19-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.395 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[6135d58b-fd8d-4f6b-b8ed-c1081d4d6422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.398 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[7723041d-7d86-45b6-96d4-af5f0927ba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.434 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[df798297-f675-40e2-a747-26c2dfb4209a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.457 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[242806cb-89cb-4331-9b63-4254eb2d8c86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd64d126-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c5:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436800, 'reachable_time': 35556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240625, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.474 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a125dcde-24cf-477c-84bb-b718fd2b7c8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:c5fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436800, 'tstamp': 436800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240626, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.475 230187 DEBUG nova.compute.manager [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.476 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.477 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.478 230187 DEBUG oslo_concurrency.lockutils [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:39 np0005532762 nova_compute[230183]: 2025-11-23 21:13:39.478 230187 DEBUG nova.compute.manager [req-5d5cce07-7127-4145-a7a9-7f0813cf62ce req-248afa77-1aea-4d95-846d-e4e4843dfd15 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Processing event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:13:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:39 np0005532762 NetworkManager[49021]: <info>  [1763932419.9641] device (tapfd64d126-b0): carrier: link connected
Nov 23 16:13:39 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:39.979 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[376e4596-ddc4-427c-a5c0-b83fde40941c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd64d126-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:c5:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436800, 'reachable_time': 35556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240628, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.020 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4b53aa-2b9a-4228-89cc-66687a4a8460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.076 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4db8a726-fdb7-4cf9-aa5e-5d95b6053bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.077 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd64d126-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.077 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.078 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd64d126-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.079 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:40 np0005532762 kernel: tapfd64d126-b0: entered promiscuous mode
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.082 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:40 np0005532762 NetworkManager[49021]: <info>  [1763932420.0838] manager: (tapfd64d126-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.083 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd64d126-b0, col_values=(('external_ids', {'iface-id': '6ab19126-935d-4e09-a163-fbca05fb1c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.084 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:40 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:40Z|00098|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.085 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.086 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.087 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84d008-22a6-44bf-8806-627cc78a04fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.088 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.pid.haproxy
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID fd64d126-bc30-4f96-8737-9a4b1cf2fe8a
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:13:40 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:40.089 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'env', 'PROCESS_TAG=haproxy-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd64d126-bc30-4f96-8737-9a4b1cf2fe8a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.104 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.281 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.2803879, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.282 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Started (Lifecycle Event)#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.285 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.288 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.292 230187 INFO nova.virt.libvirt.driver [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance spawned successfully.#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.292 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.306 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.311 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.315 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.315 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.316 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.317 230187 DEBUG nova.virt.libvirt.driver [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.337 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.337 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.2815745, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.338 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.357 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.360 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932420.287681, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.360 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.367 230187 INFO nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 5.76 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.367 230187 DEBUG nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.375 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.378 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.407 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.425 230187 INFO nova.compute.manager [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 6.64 seconds to build instance.#033[00m
Nov 23 16:13:40 np0005532762 nova_compute[230183]: 2025-11-23 21:13:40.437 230187 DEBUG oslo_concurrency.lockutils [None req-1c64b8ca-12b7-4524-9508-e60d57d02058 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:40 np0005532762 podman[240700]: 2025-11-23 21:13:40.461708602 +0000 UTC m=+0.057271509 container create aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 16:13:40 np0005532762 systemd[1]: Started libpod-conmon-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope.
Nov 23 16:13:40 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:13:40 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3f4166f87894fa9c10f5c59d20e17390aa56344dd4fc7f9dd0c49158d6a062/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:13:40 np0005532762 podman[240700]: 2025-11-23 21:13:40.435193855 +0000 UTC m=+0.030756782 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:13:40 np0005532762 podman[240700]: 2025-11-23 21:13:40.540739571 +0000 UTC m=+0.136302508 container init aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 16:13:40 np0005532762 podman[240700]: 2025-11-23 21:13:40.545394575 +0000 UTC m=+0.140957482 container start aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:13:40 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : New worker (240721) forked
Nov 23 16:13:40 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : Loading success.
Nov 23 16:13:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.612778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420612818, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1761, "num_deletes": 257, "total_data_size": 4531538, "memory_usage": 4623792, "flush_reason": "Manual Compaction"}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420631136, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2940573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28502, "largest_seqno": 30258, "table_properties": {"data_size": 2933288, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15040, "raw_average_key_size": 19, "raw_value_size": 2918711, "raw_average_value_size": 3790, "num_data_blocks": 186, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932273, "oldest_key_time": 1763932273, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 18400 microseconds, and 5433 cpu microseconds.
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631176) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2940573 bytes OK
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.631193) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632355) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632376) EVENT_LOG_v1 {"time_micros": 1763932420632369, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.632397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4523514, prev total WAL file size 4523514, number of live WAL files 2.
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.633700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2871KB)], [54(14MB)]
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420633725, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17869174, "oldest_snapshot_seqno": -1}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6085 keys, 17720577 bytes, temperature: kUnknown
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420742407, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17720577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17676197, "index_size": 28078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154804, "raw_average_key_size": 25, "raw_value_size": 17562991, "raw_average_value_size": 2886, "num_data_blocks": 1153, "num_entries": 6085, "num_filter_entries": 6085, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.742603) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17720577 bytes
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.746059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 163.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 14.2 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6617, records dropped: 532 output_compression: NoCompression
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.746077) EVENT_LOG_v1 {"time_micros": 1763932420746070, "job": 32, "event": "compaction_finished", "compaction_time_micros": 108740, "compaction_time_cpu_micros": 40678, "output_level": 6, "num_output_files": 1, "total_output_size": 17720577, "num_input_records": 6617, "num_output_records": 6085, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420746643, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932420748998, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.633640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:40 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:13:40.749082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:13:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.558 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.559 230187 DEBUG oslo_concurrency.lockutils [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.560 230187 DEBUG nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:41 np0005532762 nova_compute[230183]: 2025-11-23 21:13:41.560 230187 WARNING nova.compute.manager [req-509eb8d3-cd45-476f-93ab-c382ec61f97a req-5d8edfaf-3589-4218-90b9-aae3742e6e19 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received unexpected event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with vm_state active and task_state None.#033[00m
Nov 23 16:13:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:42 np0005532762 nova_compute[230183]: 2025-11-23 21:13:42.329 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:42 np0005532762 nova_compute[230183]: 2025-11-23 21:13:42.814 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.112 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 NetworkManager[49021]: <info>  [1763932423.1148] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 23 16:13:43 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:43Z|00099|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 16:13:43 np0005532762 NetworkManager[49021]: <info>  [1763932423.1159] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 23 16:13:43 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:43Z|00100|binding|INFO|Releasing lport 6ab19126-935d-4e09-a163-fbca05fb1c6f from this chassis (sb_readonly=0)
Nov 23 16:13:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.586 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.587 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.587 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.588 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.588 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.589 230187 INFO nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Terminating instance#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.589 230187 DEBUG nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:13:43 np0005532762 kernel: tapba818b19-9f (unregistering): left promiscuous mode
Nov 23 16:13:43 np0005532762 NetworkManager[49021]: <info>  [1763932423.6294] device (tapba818b19-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:13:43 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:43Z|00101|binding|INFO|Releasing lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 from this chassis (sb_readonly=0)
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.643 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:43Z|00102|binding|INFO|Setting lport ba818b19-9f72-4242-b9d9-b1630b5d1f24 down in Southbound
Nov 23 16:13:43 np0005532762 ovn_controller[132845]: 2025-11-23T21:13:43Z|00103|binding|INFO|Removing iface tapba818b19-9f ovn-installed in OVS
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.647 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.669 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.673 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e6:fe 10.100.0.12'], port_security=['fa:16:3e:0d:e6:fe 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c73efbfb-509e-4eb2-af63-a65ba0f98094', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1655123038', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfd1f7f1-25d4-42fe-ac59-ece898bff9bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bc3d174-1770-40d5-b0cb-7f310bc5e484, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=ba818b19-9f72-4242-b9d9-b1630b5d1f24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.675 142158 INFO neutron.agent.ovn.metadata.agent [-] Port ba818b19-9f72-4242-b9d9-b1630b5d1f24 in datapath fd64d126-bc30-4f96-8737-9a4b1cf2fe8a unbound from our chassis#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.676 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.678 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[73a18728-34e4-46a8-8f75-942d1b2bfc6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.679 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a namespace which is not needed anymore#033[00m
Nov 23 16:13:43 np0005532762 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 23 16:13:43 np0005532762 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 4.350s CPU time.
Nov 23 16:13:43 np0005532762 systemd-machined[193469]: Machine qemu-5-instance-00000008 terminated.
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.720 230187 DEBUG nova.compute.manager [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.720 230187 DEBUG nova.compute.manager [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing instance network info cache due to event network-changed-ba818b19-9f72-4242-b9d9-b1630b5d1f24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.721 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Refreshing network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:13:43 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : haproxy version is 2.8.14-c23fe91
Nov 23 16:13:43 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [NOTICE]   (240719) : path to executable is /usr/sbin/haproxy
Nov 23 16:13:43 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [WARNING]  (240719) : Exiting Master process...
Nov 23 16:13:43 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [ALERT]    (240719) : Current worker (240721) exited with code 143 (Terminated)
Nov 23 16:13:43 np0005532762 neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a[240715]: [WARNING]  (240719) : All workers exited. Exiting... (0)
Nov 23 16:13:43 np0005532762 systemd[1]: libpod-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope: Deactivated successfully.
Nov 23 16:13:43 np0005532762 conmon[240715]: conmon aed257386fa7168d7aaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope/container/memory.events
Nov 23 16:13:43 np0005532762 podman[240756]: 2025-11-23 21:13:43.823392025 +0000 UTC m=+0.048601799 container died aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.829 230187 INFO nova.virt.libvirt.driver [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Instance destroyed successfully.#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.829 230187 DEBUG nova.objects.instance [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid c73efbfb-509e-4eb2-af63-a65ba0f98094 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.842 230187 DEBUG nova.virt.libvirt.vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1142109245',display_name='tempest-TestNetworkBasicOps-server-1142109245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1142109245',id=8,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA/OTvB3SF9HLz+tQB9k6+NtWY4GDi+dCLNTP2C1LVBWBWcF8hE2KwmFS1DV+sHHE6UrvKxVths55wvKBDKkRLk/bT3g1pE3soqIrQx5GQa2qNLkE7pPi6maRhw2rsAshw==',key_name='tempest-TestNetworkBasicOps-101179999',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:13:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-mndkc2jx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:13:40Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c73efbfb-509e-4eb2-af63-a65ba0f98094,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.843 230187 DEBUG nova.network.os_vif_util [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.845 230187 DEBUG nova.network.os_vif_util [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.846 230187 DEBUG os_vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.849 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.849 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba818b19-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.851 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a-userdata-shm.mount: Deactivated successfully.
Nov 23 16:13:43 np0005532762 systemd[1]: var-lib-containers-storage-overlay-9c3f4166f87894fa9c10f5c59d20e17390aa56344dd4fc7f9dd0c49158d6a062-merged.mount: Deactivated successfully.
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.859 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.861 230187 INFO os_vif [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e6:fe,bridge_name='br-int',has_traffic_filtering=True,id=ba818b19-9f72-4242-b9d9-b1630b5d1f24,network=Network(fd64d126-bc30-4f96-8737-9a4b1cf2fe8a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapba818b19-9f')#033[00m
Nov 23 16:13:43 np0005532762 podman[240756]: 2025-11-23 21:13:43.867182773 +0000 UTC m=+0.092392567 container cleanup aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:13:43 np0005532762 systemd[1]: libpod-conmon-aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a.scope: Deactivated successfully.
Nov 23 16:13:43 np0005532762 podman[240808]: 2025-11-23 21:13:43.931044517 +0000 UTC m=+0.045267580 container remove aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.936 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c21bd24c-2c13-4e5c-a611-bb002d3a8fa2]: (4, ('Sun Nov 23 09:13:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a (aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a)\naed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a\nSun Nov 23 09:13:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a (aed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a)\naed257386fa7168d7aafca0aa1ebaca1a27cc7ec76780c42d12be9a6c36ae44a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.938 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e91a99b3-a468-48c2-9fa2-adb9afc4e4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.939 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd64d126-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:13:43 np0005532762 kernel: tapfd64d126-b0: left promiscuous mode
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.941 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 nova_compute[230183]: 2025-11-23 21:13:43.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.956 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[aa732f42-46c8-400d-89e2-8fa4f226f16e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.974 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[21c94eb2-e8ff-4eb8-a289-286cffb259e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.975 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe4dc95-25a1-439c-94f1-2466a37ac21c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.994 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c73a4ca1-6c6f-44a3-989b-101de94f3337]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436792, 'reachable_time': 26001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240826, 'error': None, 'target': 'ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.996 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd64d126-bc30-4f96-8737-9a4b1cf2fe8a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:13:43 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:43.997 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef5268f-0265-49da-80d2-c2a242cf5d25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:13:43 np0005532762 systemd[1]: run-netns-ovnmeta\x2dfd64d126\x2dbc30\x2d4f96\x2d8737\x2d9a4b1cf2fe8a.mount: Deactivated successfully.
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.299 230187 INFO nova.virt.libvirt.driver [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deleting instance files /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094_del#033[00m
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.300 230187 INFO nova.virt.libvirt.driver [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deletion of /var/lib/nova/instances/c73efbfb-509e-4eb2-af63-a65ba0f98094_del complete#033[00m
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.358 230187 INFO nova.compute.manager [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.360 230187 DEBUG oslo.service.loopingcall [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.361 230187 DEBUG nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:13:44 np0005532762 nova_compute[230183]: 2025-11-23 21:13:44.361 230187 DEBUG nova.network.neutron [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:13:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.003000078s ======
Nov 23 16:13:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.052 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updated VIF entry in instance network info cache for port ba818b19-9f72-4242-b9d9-b1630b5d1f24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.054 230187 DEBUG nova.network.neutron [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [{"id": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "address": "fa:16:3e:0d:e6:fe", "network": {"id": "fd64d126-bc30-4f96-8737-9a4b1cf2fe8a", "bridge": "br-int", "label": "tempest-network-smoke--1300883220", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba818b19-9f", "ovs_interfaceid": "ba818b19-9f72-4242-b9d9-b1630b5d1f24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.072 230187 DEBUG oslo_concurrency.lockutils [req-78eceada-8483-4cb1-ada7-74c934ec2e8e req-6307c7ca-8b79-43ac-8d52-4b7e37ec93f1 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c73efbfb-509e-4eb2-af63-a65ba0f98094" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:13:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.775 230187 DEBUG nova.network.neutron [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.792 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.792 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-unplugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.793 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG oslo_concurrency.lockutils [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 DEBUG nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] No waiting events found dispatching network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.794 230187 WARNING nova.compute.manager [req-5500e33c-6c6f-4554-9444-5c3bd306dd0c req-50977513-41d8-4bab-8890-af002409110f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Received unexpected event network-vif-plugged-ba818b19-9f72-4242-b9d9-b1630b5d1f24 for instance with vm_state active and task_state deleting.#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.796 230187 INFO nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Took 1.44 seconds to deallocate network for instance.#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.830 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.830 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:45 np0005532762 nova_compute[230183]: 2025-11-23 21:13:45.884 230187 DEBUG oslo_concurrency.processutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:13:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:13:46 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2090295414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.331 230187 DEBUG oslo_concurrency.processutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.341 230187 DEBUG nova.compute.provider_tree [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.359 230187 DEBUG nova.scheduler.client.report [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.381 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.414 230187 INFO nova.scheduler.client.report [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance c73efbfb-509e-4eb2-af63-a65ba0f98094#033[00m
Nov 23 16:13:46 np0005532762 nova_compute[230183]: 2025-11-23 21:13:46.481 230187 DEBUG oslo_concurrency.lockutils [None req-22ed090a-2f1d-48d5-a77a-c1a127b1a72e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c73efbfb-509e-4eb2-af63-a65ba0f98094" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:46.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:47 np0005532762 nova_compute[230183]: 2025-11-23 21:13:47.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:13:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:13:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:48 np0005532762 nova_compute[230183]: 2025-11-23 21:13:48.852 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:49.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:50.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:13:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:13:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:13:51.070 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:13:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:52 np0005532762 nova_compute[230183]: 2025-11-23 21:13:52.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:52.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:53 np0005532762 nova_compute[230183]: 2025-11-23 21:13:53.854 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:13:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:13:56 np0005532762 podman[240883]: 2025-11-23 21:13:56.665162873 +0000 UTC m=+0.070634116 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:13:56 np0005532762 podman[240882]: 2025-11-23 21:13:56.675596811 +0000 UTC m=+0.088171634 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:13:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:13:57 np0005532762 nova_compute[230183]: 2025-11-23 21:13:57.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:13:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:13:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:13:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:13:58 np0005532762 nova_compute[230183]: 2025-11-23 21:13:58.828 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932423.826772, c73efbfb-509e-4eb2-af63-a65ba0f98094 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:13:58 np0005532762 nova_compute[230183]: 2025-11-23 21:13:58.828 230187 INFO nova.compute.manager [-] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:13:58 np0005532762 nova_compute[230183]: 2025-11-23 21:13:58.855 230187 DEBUG nova.compute.manager [None req-2234cdeb-2893-475f-834a-2b64cfecfdc7 - - - - - -] [instance: c73efbfb-509e-4eb2-af63-a65ba0f98094] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:13:58 np0005532762 nova_compute[230183]: 2025-11-23 21:13:58.855 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:13:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:13:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:13:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:13:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:01 np0005532762 nova_compute[230183]: 2025-11-23 21:14:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:01 np0005532762 podman[241011]: 2025-11-23 21:14:01.660426629 +0000 UTC m=+0.078872375 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 16:14:01 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:14:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:02 np0005532762 nova_compute[230183]: 2025-11-23 21:14:02.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:14:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:03 np0005532762 nova_compute[230183]: 2025-11-23 21:14:03.858 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.449 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.449 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:04.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:14:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1627076870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:14:04 np0005532762 nova_compute[230183]: 2025-11-23 21:14:04.886 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.018 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.019 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4923MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.020 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.020 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.066 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.066 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.090 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:14:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3260849041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.525 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.529 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.543 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.558 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:14:05 np0005532762 nova_compute[230183]: 2025-11-23 21:14:05.558 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.559 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.571 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.572 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:06.753 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:14:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:06.753 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:14:06 np0005532762 nova_compute[230183]: 2025-11-23 21:14:06.754 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:07 np0005532762 nova_compute[230183]: 2025-11-23 21:14:07.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:07 np0005532762 nova_compute[230183]: 2025-11-23 21:14:07.436 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:14:08 np0005532762 nova_compute[230183]: 2025-11-23 21:14:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:14:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:08 np0005532762 nova_compute[230183]: 2025-11-23 21:14:08.860 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:09.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:10 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:10.756 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:10 np0005532762 nova_compute[230183]: 2025-11-23 21:14:10.940 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:11 np0005532762 nova_compute[230183]: 2025-11-23 21:14:11.015 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:11.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:12 np0005532762 nova_compute[230183]: 2025-11-23 21:14:12.372 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:13 np0005532762 nova_compute[230183]: 2025-11-23 21:14:13.862 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:14.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:17 np0005532762 nova_compute[230183]: 2025-11-23 21:14:17.375 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:17.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:18 np0005532762 nova_compute[230183]: 2025-11-23 21:14:18.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:19.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:22 np0005532762 nova_compute[230183]: 2025-11-23 21:14:22.432 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:23 np0005532762 nova_compute[230183]: 2025-11-23 21:14:23.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:27 np0005532762 nova_compute[230183]: 2025-11-23 21:14:27.434 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:27 np0005532762 podman[241165]: 2025-11-23 21:14:27.669146941 +0000 UTC m=+0.083889920 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:14:27 np0005532762 podman[241166]: 2025-11-23 21:14:27.669924822 +0000 UTC m=+0.083904980 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:14:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:28 np0005532762 nova_compute[230183]: 2025-11-23 21:14:28.879 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.612 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.612 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.627 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.713 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.714 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.723 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.723 230187 INFO nova.compute.claims [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:14:29 np0005532762 nova_compute[230183]: 2025-11-23 21:14:29.823 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:14:30 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/128319179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.285 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.291 230187 DEBUG nova.compute.provider_tree [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.314 230187 DEBUG nova.scheduler.client.report [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.344 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.345 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.405 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.405 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.430 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.448 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.567 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.568 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.568 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating image(s)#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.586 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.607 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.626 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.629 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.695 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.696 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.697 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.697 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.717 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.720 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c833a97e-dc45-489f-98e1-a2d33397836c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:30 np0005532762 nova_compute[230183]: 2025-11-23 21:14:30.996 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 c833a97e-dc45-489f-98e1-a2d33397836c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.072 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.184 230187 DEBUG nova.objects.instance [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.200 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.201 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Ensure instance console log exists: /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.202 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:31 np0005532762 nova_compute[230183]: 2025-11-23 21:14:31.346 230187 DEBUG nova.policy [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:14:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:31.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:32 np0005532762 nova_compute[230183]: 2025-11-23 21:14:32.436 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:32 np0005532762 podman[241399]: 2025-11-23 21:14:32.626742395 +0000 UTC m=+0.047573510 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 23 16:14:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:32 np0005532762 nova_compute[230183]: 2025-11-23 21:14:32.975 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Successfully created port: b71755c1-8148-40c0-884d-aad83ae8602a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:14:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:33 np0005532762 nova_compute[230183]: 2025-11-23 21:14:33.882 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.070 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Successfully updated port: b71755c1-8148-40c0-884d-aad83ae8602a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.087 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG nova.compute.manager [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG nova.compute.manager [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.161 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:14:34 np0005532762 nova_compute[230183]: 2025-11-23 21:14:34.216 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:14:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.528 230187 DEBUG nova.network.neutron [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.548 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.549 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance network_info: |[{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.550 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.550 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.553 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start _get_guest_xml network_info=[{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.558 230187 WARNING nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.570 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.571 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.577 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.578 230187 DEBUG nova.virt.libvirt.host [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.578 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.579 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.579 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.580 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.581 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.582 230187 DEBUG nova.virt.hardware [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:14:35 np0005532762 nova_compute[230183]: 2025-11-23 21:14:35.585 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:35.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:35 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:14:35 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3264451218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.013 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.035 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.039 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:36 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:14:36 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/440052661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.465 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.466 230187 DEBUG nova.virt.libvirt.vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:14:30Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.467 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.468 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.469 230187 DEBUG nova.objects.instance [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.490 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <uuid>c833a97e-dc45-489f-98e1-a2d33397836c</uuid>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <name>instance-0000000a</name>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-582512634</nova:name>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:14:35</nova:creationTime>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <nova:port uuid="b71755c1-8148-40c0-884d-aad83ae8602a">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="serial">c833a97e-dc45-489f-98e1-a2d33397836c</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="uuid">c833a97e-dc45-489f-98e1-a2d33397836c</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/c833a97e-dc45-489f-98e1-a2d33397836c_disk">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/c833a97e-dc45-489f-98e1-a2d33397836c_disk.config">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:2a:70:ad"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <target dev="tapb71755c1-81"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/console.log" append="off"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:14:36 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:14:36 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:14:36 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:14:36 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.491 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Preparing to wait for external event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.492 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.493 230187 DEBUG nova.virt.libvirt.vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:14:30Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.494 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.494 230187 DEBUG nova.network.os_vif_util [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.495 230187 DEBUG os_vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.495 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.496 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.496 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb71755c1-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.500 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb71755c1-81, col_values=(('external_ids', {'iface-id': 'b71755c1-8148-40c0-884d-aad83ae8602a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:70:ad', 'vm-uuid': 'c833a97e-dc45-489f-98e1-a2d33397836c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.502 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:36 np0005532762 NetworkManager[49021]: <info>  [1763932476.5032] manager: (tapb71755c1-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.509 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.510 230187 INFO os_vif [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81')#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.548 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:2a:70:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.549 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Using config drive#033[00m
Nov 23 16:14:36 np0005532762 nova_compute[230183]: 2025-11-23 21:14:36.570 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.583 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Creating config drive at /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.588 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjumm1zo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:37.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.731 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjumm1zo" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.765 230187 DEBUG nova.storage.rbd_utils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image c833a97e-dc45-489f-98e1-a2d33397836c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.769 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config c833a97e-dc45-489f-98e1-a2d33397836c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.940 230187 DEBUG oslo_concurrency.processutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config c833a97e-dc45-489f-98e1-a2d33397836c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:14:37 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.942 230187 INFO nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deleting local config drive /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c/disk.config because it was imported into RBD.#033[00m
Nov 23 16:14:37 np0005532762 kernel: tapb71755c1-81: entered promiscuous mode
Nov 23 16:14:37 np0005532762 NetworkManager[49021]: <info>  [1763932477.9987] manager: (tapb71755c1-81): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 23 16:14:37 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:37Z|00104|binding|INFO|Claiming lport b71755c1-8148-40c0-884d-aad83ae8602a for this chassis.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:37.999 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:38Z|00105|binding|INFO|b71755c1-8148-40c0-884d-aad83ae8602a: Claiming fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.021 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:70:ad 10.100.0.10'], port_security=['fa:16:3e:2a:70:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c833a97e-dc45-489f-98e1-a2d33397836c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01afc80e-05e3-4e44-a9a7-ca2439f76ab4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94ce2d65-f870-4d9e-a5f2-e431f68e3936, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=b71755c1-8148-40c0-884d-aad83ae8602a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.023 142158 INFO neutron.agent.ovn.metadata.agent [-] Port b71755c1-8148-40c0-884d-aad83ae8602a in datapath 33439544-e5f9-4500-9a9c-dbc1c4cd858c bound to our chassis#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.025 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33439544-e5f9-4500-9a9c-dbc1c4cd858c#033[00m
Nov 23 16:14:38 np0005532762 systemd-udevd[241555]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:14:38 np0005532762 NetworkManager[49021]: <info>  [1763932478.0395] device (tapb71755c1-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:14:38 np0005532762 NetworkManager[49021]: <info>  [1763932478.0403] device (tapb71755c1-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.042 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ae39b72b-a1ce-4b7b-992b-b83a9b165b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.045 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33439544-e1 in ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:14:38 np0005532762 systemd-machined[193469]: New machine qemu-6-instance-0000000a.
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.047 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33439544-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.047 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab0238c-ba55-4f6f-8691-49cd850e594f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.049 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b3cea3a1-a173-4282-ae0c-bedf9552293a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.063 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[0c24c3cc-634f-4cbf-8226-533d8c398504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.089 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[79f52c09-6b56-4a7f-86f7-1cb53ab2091c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 systemd[1]: Started Virtual Machine qemu-6-instance-0000000a.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:38Z|00106|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a ovn-installed in OVS
Nov 23 16:14:38 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:38Z|00107|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a up in Southbound
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.107 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.116 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[f54bf081-271e-4364-9b9c-d865becf9add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.120 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0700fbf1-0a1b-45d2-b97f-ea6ccd2df5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 NetworkManager[49021]: <info>  [1763932478.1253] manager: (tap33439544-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.154 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf809797-c6db-4855-bc63-50e0e9663a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.157 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d330d7-442e-4029-9753-10a328293308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 NetworkManager[49021]: <info>  [1763932478.1799] device (tap33439544-e0): carrier: link connected
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.187 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[76344240-17e5-468a-b712-46bc56c1937c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.205 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[62e2e13e-ec90-4ed1-a3b6-df9603a568d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33439544-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:1d:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442675, 'reachable_time': 25570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241591, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.221 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[3d308a2c-685d-4764-944a-d43c6697b56f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:1d6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442675, 'tstamp': 442675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241592, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.239 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[698cbdb4-6464-450c-a6f3-cc9b4656bfa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33439544-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:1d:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442675, 'reachable_time': 25570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241593, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.267 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf18e2b-8afa-46df-90de-b57b91901771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.338 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cf53605a-e6d4-449a-867e-37f0265e8cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.339 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33439544-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.340 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.340 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33439544-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.342 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 NetworkManager[49021]: <info>  [1763932478.3430] manager: (tap33439544-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 23 16:14:38 np0005532762 kernel: tap33439544-e0: entered promiscuous mode
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.346 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33439544-e0, col_values=(('external_ids', {'iface-id': '3160bfc6-c855-4bc2-a26d-97781eac404c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:14:38 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:38Z|00108|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.352 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.355 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.355 230187 DEBUG nova.network.neutron [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.366 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.367 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.367 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[b536def7-e62d-4c8c-a584-5d554a784cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.368 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-33439544-e5f9-4500-9a9c-dbc1c4cd858c
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/33439544-e5f9-4500-9a9c-dbc1c4cd858c.pid.haproxy
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 33439544-e5f9-4500-9a9c-dbc1c4cd858c
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 23 16:14:38 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:38.370 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'env', 'PROCESS_TAG=haproxy-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33439544-e5f9-4500-9a9c-dbc1c4cd858c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.373 230187 DEBUG oslo_concurrency.lockutils [req-79bc167b-e41f-4be0-b9c5-20b19be51c88 req-892a5d5b-35c7-4c68-80b6-40fa7eba739a 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.584 230187 DEBUG nova.compute.manager [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.585 230187 DEBUG oslo_concurrency.lockutils [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.586 230187 DEBUG nova.compute.manager [req-392ec6ea-38a4-40d6-955c-d42d5d957a3c req-fea96aec-231b-4cc4-ac33-bbd00cffd22d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Processing event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 23 16:14:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.707 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7071743, c833a97e-dc45-489f-98e1-a2d33397836c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.708 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Started (Lifecycle Event)
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.710 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.712 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.715 230187 INFO nova.virt.libvirt.driver [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance spawned successfully.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.715 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.755 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.759 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.759 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.760 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.761 230187 DEBUG nova.virt.libvirt.driver [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 23 16:14:38 np0005532762 podman[241667]: 2025-11-23 21:14:38.761530395 +0000 UTC m=+0.048206297 container create e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.765 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 16:14:38 np0005532762 systemd[1]: Started libpod-conmon-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope.
Nov 23 16:14:38 np0005532762 podman[241667]: 2025-11-23 21:14:38.735747388 +0000 UTC m=+0.022423290 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.847 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.848 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7072747, c833a97e-dc45-489f-98e1-a2d33397836c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.848 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Paused (Lifecycle Event)
Nov 23 16:14:38 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:14:38 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d68e6c64db3a5cf013782d6e596c5e985b2d9de44852e9704045df6024ec0e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:14:38 np0005532762 podman[241667]: 2025-11-23 21:14:38.863983421 +0000 UTC m=+0.150659373 container init e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:14:38 np0005532762 podman[241667]: 2025-11-23 21:14:38.869165778 +0000 UTC m=+0.155841690 container start e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 16:14:38 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : New worker (241688) forked
Nov 23 16:14:38 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : Loading success.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.943 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.946 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932478.7122378, c833a97e-dc45-489f-98e1-a2d33397836c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.947 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Resumed (Lifecycle Event)
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.969 230187 INFO nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 8.40 seconds to spawn the instance on the hypervisor.
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.969 230187 DEBUG nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.970 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 23 16:14:38 np0005532762 nova_compute[230183]: 2025-11-23 21:14:38.976 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 23 16:14:39 np0005532762 nova_compute[230183]: 2025-11-23 21:14:39.004 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 23 16:14:39 np0005532762 nova_compute[230183]: 2025-11-23 21:14:39.041 230187 INFO nova.compute.manager [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 9.36 seconds to build instance.
Nov 23 16:14:39 np0005532762 nova_compute[230183]: 2025-11-23 21:14:39.055 230187 DEBUG oslo_concurrency.lockutils [None req-d3546b50-fe88-4271-a79b-73a345b546a8 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.405074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479405119, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1110, "num_deletes": 503, "total_data_size": 1749201, "memory_usage": 1777744, "flush_reason": "Manual Compaction"}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479416439, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1006534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30263, "largest_seqno": 31368, "table_properties": {"data_size": 1002115, "index_size": 1559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13867, "raw_average_key_size": 19, "raw_value_size": 991050, "raw_average_value_size": 1393, "num_data_blocks": 68, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932421, "oldest_key_time": 1763932421, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 11445 microseconds, and 6431 cpu microseconds.
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.416515) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1006534 bytes OK
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.416543) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418080) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418104) EVENT_LOG_v1 {"time_micros": 1763932479418096, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.418127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1742764, prev total WAL file size 1742764, number of live WAL files 2.
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.419128) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(982KB)], [57(16MB)]
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479419179, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18727111, "oldest_snapshot_seqno": -1}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5784 keys, 12640468 bytes, temperature: kUnknown
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479605971, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12640468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12603743, "index_size": 21191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149716, "raw_average_key_size": 25, "raw_value_size": 12501307, "raw_average_value_size": 2161, "num_data_blocks": 849, "num_entries": 5784, "num_filter_entries": 5784, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.606196) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12640468 bytes
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.609521) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.2 rd, 67.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.9 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(31.2) write-amplify(12.6) OK, records in: 6796, records dropped: 1012 output_compression: NoCompression
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.609537) EVENT_LOG_v1 {"time_micros": 1763932479609530, "job": 34, "event": "compaction_finished", "compaction_time_micros": 186845, "compaction_time_cpu_micros": 48574, "output_level": 6, "num_output_files": 1, "total_output_size": 12640468, "num_input_records": 6796, "num_output_records": 5784, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479609768, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932479612322, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.419036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:14:39.612402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:14:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.674 230187 DEBUG nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.675 230187 DEBUG oslo_concurrency.lockutils [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.676 230187 DEBUG nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 23 16:14:40 np0005532762 nova_compute[230183]: 2025-11-23 21:14:40.676 230187 WARNING nova.compute.manager [req-2312e00e-4a01-44fd-8fd9-fd08cc61cf80 req-be87a50e-25fc-4c3c-b437-3be6a1a3758f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received unexpected event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with vm_state active and task_state None.
Nov 23 16:14:41 np0005532762 nova_compute[230183]: 2025-11-23 21:14:41.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:14:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:42 np0005532762 nova_compute[230183]: 2025-11-23 21:14:42.480 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:42.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:46 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:46Z|00109|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 16:14:46 np0005532762 NetworkManager[49021]: <info>  [1763932486.0500] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 23 16:14:46 np0005532762 NetworkManager[49021]: <info>  [1763932486.0513] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.061 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:46 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:46Z|00110|binding|INFO|Releasing lport 3160bfc6-c855-4bc2-a26d-97781eac404c from this chassis (sb_readonly=0)
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.084 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.088 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.472 230187 DEBUG nova.compute.manager [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.472 230187 DEBUG nova.compute.manager [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.473 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:14:46 np0005532762 nova_compute[230183]: 2025-11-23 21:14:46.504 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:47 np0005532762 nova_compute[230183]: 2025-11-23 21:14:47.482 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:49 np0005532762 nova_compute[230183]: 2025-11-23 21:14:49.214 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:14:49 np0005532762 nova_compute[230183]: 2025-11-23 21:14:49.215 230187 DEBUG nova.network.neutron [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:14:49 np0005532762 nova_compute[230183]: 2025-11-23 21:14:49.231 230187 DEBUG oslo_concurrency.lockutils [req-99572406-97f2-4bc5-9f00-73495233bdeb req-e7ae87d9-8c61-4466-a63e-b9beb7ee4e47 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:14:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.071 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:14:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:14:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:14:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:14:51 np0005532762 nova_compute[230183]: 2025-11-23 21:14:51.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:51.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:52 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:52Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:14:52 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:52Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:14:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:52 np0005532762 nova_compute[230183]: 2025-11-23 21:14:52.484 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:56 np0005532762 nova_compute[230183]: 2025-11-23 21:14:56.508 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:14:57 np0005532762 nova_compute[230183]: 2025-11-23 21:14:57.486 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:14:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:14:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:14:58 np0005532762 podman[241736]: 2025-11-23 21:14:58.664646842 +0000 UTC m=+0.066504925 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 16:14:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:14:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:14:58 np0005532762 podman[241735]: 2025-11-23 21:14:58.711170425 +0000 UTC m=+0.112087813 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:14:58 np0005532762 nova_compute[230183]: 2025-11-23 21:14:58.920 230187 INFO nova.compute.manager [None req-4864e87e-b5c5-41e1-a783-f5e802fe6f1e 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Get console output#033[00m
Nov 23 16:14:58 np0005532762 nova_compute[230183]: 2025-11-23 21:14:58.925 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:14:59 np0005532762 ovn_controller[132845]: 2025-11-23T21:14:59Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:14:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:14:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:14:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:14:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:01 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:01Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:15:01 np0005532762 nova_compute[230183]: 2025-11-23 21:15:01.511 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:01.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:02 np0005532762 nova_compute[230183]: 2025-11-23 21:15:02.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:02 np0005532762 nova_compute[230183]: 2025-11-23 21:15:02.489 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:03 np0005532762 podman[241783]: 2025-11-23 21:15:03.648631436 +0000 UTC m=+0.062325015 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 16:15:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:03.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.451 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.451 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3371525506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.915 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.973 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:15:04 np0005532762 nova_compute[230183]: 2025-11-23 21:15:04.973 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.148 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4734MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.205 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance c833a97e-dc45-489f-98e1-a2d33397836c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.205 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.206 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.236 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3922220823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.702 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.710 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.734 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.762 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.763 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.792 230187 DEBUG nova.compute.manager [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.793 230187 DEBUG nova.compute.manager [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing instance network info cache due to event network-changed-b71755c1-8148-40c0-884d-aad83ae8602a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.793 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.794 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.794 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Refreshing network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:15:05 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:05Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:70:ad 10.100.0.10
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.945 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.945 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.946 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.947 230187 INFO nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Terminating instance#033[00m
Nov 23 16:15:05 np0005532762 nova_compute[230183]: 2025-11-23 21:15:05.948 230187 DEBUG nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:15:05 np0005532762 kernel: tapb71755c1-81 (unregistering): left promiscuous mode
Nov 23 16:15:06 np0005532762 NetworkManager[49021]: <info>  [1763932506.0014] device (tapb71755c1-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:15:06 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:06Z|00111|binding|INFO|Releasing lport b71755c1-8148-40c0-884d-aad83ae8602a from this chassis (sb_readonly=0)
Nov 23 16:15:06 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:06Z|00112|binding|INFO|Setting lport b71755c1-8148-40c0-884d-aad83ae8602a down in Southbound
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.005 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:06Z|00113|binding|INFO|Removing iface tapb71755c1-81 ovn-installed in OVS
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.015 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:70:ad 10.100.0.10'], port_security=['fa:16:3e:2a:70:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c833a97e-dc45-489f-98e1-a2d33397836c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01afc80e-05e3-4e44-a9a7-ca2439f76ab4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94ce2d65-f870-4d9e-a5f2-e431f68e3936, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=b71755c1-8148-40c0-884d-aad83ae8602a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.016 142158 INFO neutron.agent.ovn.metadata.agent [-] Port b71755c1-8148-40c0-884d-aad83ae8602a in datapath 33439544-e5f9-4500-9a9c-dbc1c4cd858c unbound from our chassis#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.017 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33439544-e5f9-4500-9a9c-dbc1c4cd858c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.019 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69e52c-bd29-4d17-84e7-9671b44821ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.020 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c namespace which is not needed anymore#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.025 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 23 16:15:06 np0005532762 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Consumed 14.183s CPU time.
Nov 23 16:15:06 np0005532762 systemd-machined[193469]: Machine qemu-6-instance-0000000a terminated.
Nov 23 16:15:06 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : haproxy version is 2.8.14-c23fe91
Nov 23 16:15:06 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [NOTICE]   (241686) : path to executable is /usr/sbin/haproxy
Nov 23 16:15:06 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [WARNING]  (241686) : Exiting Master process...
Nov 23 16:15:06 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [ALERT]    (241686) : Current worker (241688) exited with code 143 (Terminated)
Nov 23 16:15:06 np0005532762 neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c[241682]: [WARNING]  (241686) : All workers exited. Exiting... (0)
Nov 23 16:15:06 np0005532762 systemd[1]: libpod-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope: Deactivated successfully.
Nov 23 16:15:06 np0005532762 podman[241875]: 2025-11-23 21:15:06.157816036 +0000 UTC m=+0.044478718 container died e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.181 230187 INFO nova.virt.libvirt.driver [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Instance destroyed successfully.#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.181 230187 DEBUG nova.objects.instance [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid c833a97e-dc45-489f-98e1-a2d33397836c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:15:06 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53-userdata-shm.mount: Deactivated successfully.
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.193 230187 DEBUG nova.virt.libvirt.vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-582512634',display_name='tempest-TestNetworkBasicOps-server-582512634',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-582512634',id=10,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQNYwhSj43+WrihAMXXrcf3ffitbakZUmwhOuijrPcqM40TmVQc3wMfjU/cZyNHNeBKw0TKec9vXExOxmFIsncMN4D0yIYffuxIytj1M98N5vK6pCD4pL97G7XeskRufg==',key_name='tempest-TestNetworkBasicOps-1166238543',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:14:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-2k77ohtd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:14:39Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=c833a97e-dc45-489f-98e1-a2d33397836c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.194 230187 DEBUG nova.network.os_vif_util [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:15:06 np0005532762 systemd[1]: var-lib-containers-storage-overlay-1d68e6c64db3a5cf013782d6e596c5e985b2d9de44852e9704045df6024ec0e3-merged.mount: Deactivated successfully.
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.195 230187 DEBUG nova.network.os_vif_util [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.195 230187 DEBUG os_vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.197 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.197 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb71755c1-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.201 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.202 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.204 230187 INFO os_vif [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:70:ad,bridge_name='br-int',has_traffic_filtering=True,id=b71755c1-8148-40c0-884d-aad83ae8602a,network=Network(33439544-e5f9-4500-9a9c-dbc1c4cd858c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb71755c1-81')#033[00m
Nov 23 16:15:06 np0005532762 podman[241875]: 2025-11-23 21:15:06.209121126 +0000 UTC m=+0.095783808 container cleanup e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:15:06 np0005532762 systemd[1]: libpod-conmon-e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53.scope: Deactivated successfully.
Nov 23 16:15:06 np0005532762 podman[241929]: 2025-11-23 21:15:06.280060149 +0000 UTC m=+0.045805613 container remove e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.286 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[38eb9252-d571-4eff-960b-d3abad976ff4]: (4, ('Sun Nov 23 09:15:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c (e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53)\ne67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53\nSun Nov 23 09:15:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c (e67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53)\ne67131015f7e1d183875087188ccc15f5dc5862c44132df760e6ce2f33a33b53\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.287 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1d1a39-69a2-4034-9593-348ea0f86c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.288 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33439544-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:06 np0005532762 kernel: tap33439544-e0: left promiscuous mode
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.291 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.302 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.304 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[78d82b9c-666f-4c9e-a58e-f55e8c47f97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.318 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9d944-61ab-4875-9382-f030aff5dc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.319 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5e592c75-467a-4a07-bf7b-924b1996a8f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.332 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5173d6b4-1693-41f8-affa-9aa1a505fe3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442668, 'reachable_time': 43259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241970, 'error': None, 'target': 'ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.334 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33439544-e5f9-4500-9a9c-dbc1c4cd858c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:15:06 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:06.334 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[32899b22-19ee-411f-aca7-5d7205ffdd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:06 np0005532762 systemd[1]: run-netns-ovnmeta\x2d33439544\x2de5f9\x2d4500\x2d9a9c\x2ddbc1c4cd858c.mount: Deactivated successfully.
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.484 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.484 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.485 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.485 230187 DEBUG oslo_concurrency.lockutils [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.486 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.486 230187 DEBUG nova.compute.manager [req-0146ef5e-3c77-43ba-99e3-cc777819796e req-00cf655d-fd98-4b63-8758-c9dbaee84980 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-unplugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.668 230187 INFO nova.virt.libvirt.driver [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deleting instance files /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c_del#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.669 230187 INFO nova.virt.libvirt.driver [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deletion of /var/lib/nova/instances/c833a97e-dc45-489f-98e1-a2d33397836c_del complete#033[00m
Nov 23 16:15:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.717 230187 INFO nova.compute.manager [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.717 230187 DEBUG oslo.service.loopingcall [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.718 230187 DEBUG nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.718 230187 DEBUG nova.network.neutron [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.759 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.760 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.760 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:06 np0005532762 nova_compute[230183]: 2025-11-23 21:15:06.761 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:15:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.350 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updated VIF entry in instance network info cache for port b71755c1-8148-40c0-884d-aad83ae8602a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.351 230187 DEBUG nova.network.neutron [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [{"id": "b71755c1-8148-40c0-884d-aad83ae8602a", "address": "fa:16:3e:2a:70:ad", "network": {"id": "33439544-e5f9-4500-9a9c-dbc1c4cd858c", "bridge": "br-int", "label": "tempest-network-smoke--1819281856", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb71755c1-81", "ovs_interfaceid": "b71755c1-8148-40c0-884d-aad83ae8602a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.373 230187 DEBUG oslo_concurrency.lockutils [req-1905cd28-440c-4fba-b36a-091cd5409f2e req-19acd4d8-dc90-4efa-938b-114b92b37572 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-c833a97e-dc45-489f-98e1-a2d33397836c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.490 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:07.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.710 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:07 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:07.711 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:15:07 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:07.712 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.749 230187 DEBUG nova.network.neutron [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.765 230187 INFO nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Took 1.05 seconds to deallocate network for instance.#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.813 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.813 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:07 np0005532762 nova_compute[230183]: 2025-11-23 21:15:07.856 230187 DEBUG oslo_concurrency.processutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:08 np0005532762 podman[242106]: 2025-11-23 21:15:08.026857729 +0000 UTC m=+0.053493359 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 23 16:15:08 np0005532762 podman[242106]: 2025-11-23 21:15:08.122366398 +0000 UTC m=+0.149002028 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 23 16:15:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3253881524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.324 230187 DEBUG oslo_concurrency.processutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.331 230187 DEBUG nova.compute.provider_tree [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.345 230187 DEBUG nova.scheduler.client.report [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.365 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.398 230187 INFO nova.scheduler.client.report [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance c833a97e-dc45-489f-98e1-a2d33397836c#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.490 230187 DEBUG oslo_concurrency.lockutils [None req-b4ecd0f2-57bc-4f5f-8712-c44e601e6c0b 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:08 np0005532762 podman[242239]: 2025-11-23 21:15:08.564792378 +0000 UTC m=+0.051799973 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.593 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG oslo_concurrency.lockutils [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "c833a97e-dc45-489f-98e1-a2d33397836c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] No waiting events found dispatching network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.594 230187 WARNING nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received unexpected event network-vif-plugged-b71755c1-8148-40c0-884d-aad83ae8602a for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:15:08 np0005532762 nova_compute[230183]: 2025-11-23 21:15:08.595 230187 DEBUG nova.compute.manager [req-163a40c2-c79b-43cc-86ed-562d20f882d2 req-b86bc670-be50-442c-94e0-a84d124cff40 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Received event network-vif-deleted-b71755c1-8148-40c0-884d-aad83ae8602a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:08 np0005532762 podman[242239]: 2025-11-23 21:15:08.602563017 +0000 UTC m=+0.089570582 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:15:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:08.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:09 np0005532762 podman[242370]: 2025-11-23 21:15:09.079589661 +0000 UTC m=+0.052102292 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:15:09 np0005532762 podman[242370]: 2025-11-23 21:15:09.089251219 +0000 UTC m=+0.061763850 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:15:09 np0005532762 podman[242436]: 2025-11-23 21:15:09.301959397 +0000 UTC m=+0.048276860 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public)
Nov 23 16:15:09 np0005532762 podman[242436]: 2025-11-23 21:15:09.316199557 +0000 UTC m=+0.062516990 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc.)
Nov 23 16:15:09 np0005532762 nova_compute[230183]: 2025-11-23 21:15:09.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:15:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:09.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:11 np0005532762 nova_compute[230183]: 2025-11-23 21:15:11.238 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:11.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:11 np0005532762 nova_compute[230183]: 2025-11-23 21:15:11.699 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:11 np0005532762 nova_compute[230183]: 2025-11-23 21:15:11.783 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:12 np0005532762 nova_compute[230183]: 2025-11-23 21:15:12.492 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:13 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:15:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:14 np0005532762 ceph-mon[80135]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 23 16:15:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:15 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:15.714 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:16 np0005532762 nova_compute[230183]: 2025-11-23 21:15:16.243 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:17 np0005532762 nova_compute[230183]: 2025-11-23 21:15:17.494 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:19 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:15:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:21 np0005532762 nova_compute[230183]: 2025-11-23 21:15:21.175 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932506.173835, c833a97e-dc45-489f-98e1-a2d33397836c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:15:21 np0005532762 nova_compute[230183]: 2025-11-23 21:15:21.175 230187 INFO nova.compute.manager [-] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:15:21 np0005532762 nova_compute[230183]: 2025-11-23 21:15:21.202 230187 DEBUG nova.compute.manager [None req-25a00114-cb4e-4e3e-9f7a-324ada7a1362 - - - - - -] [instance: c833a97e-dc45-489f-98e1-a2d33397836c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:15:21 np0005532762 nova_compute[230183]: 2025-11-23 21:15:21.246 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:21.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:22 np0005532762 nova_compute[230183]: 2025-11-23 21:15:22.496 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:22.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:24.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:26 np0005532762 nova_compute[230183]: 2025-11-23 21:15:26.249 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:27 np0005532762 nova_compute[230183]: 2025-11-23 21:15:27.498 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:29 np0005532762 podman[242679]: 2025-11-23 21:15:29.677753669 +0000 UTC m=+0.088989134 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:15:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:29 np0005532762 podman[242678]: 2025-11-23 21:15:29.762653326 +0000 UTC m=+0.173848140 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 16:15:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:31 np0005532762 nova_compute[230183]: 2025-11-23 21:15:31.251 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:31.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:32 np0005532762 nova_compute[230183]: 2025-11-23 21:15:32.499 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:33.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:34 np0005532762 podman[242725]: 2025-11-23 21:15:34.64472733 +0000 UTC m=+0.059562262 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:15:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:34.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:36 np0005532762 nova_compute[230183]: 2025-11-23 21:15:36.255 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:36.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:37 np0005532762 nova_compute[230183]: 2025-11-23 21:15:37.502 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:39.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:41 np0005532762 nova_compute[230183]: 2025-11-23 21:15:41.259 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:41.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:42 np0005532762 nova_compute[230183]: 2025-11-23 21:15:42.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:15:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:15:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:43.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.481 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.481 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.510 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.626 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.626 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.637 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.638 230187 INFO nova.compute.claims [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:15:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:44.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:44 np0005532762 nova_compute[230183]: 2025-11-23 21:15:44.743 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:15:45 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1589551055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.218 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.224 230187 DEBUG nova.compute.provider_tree [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.239 230187 DEBUG nova.scheduler.client.report [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.261 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.262 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.328 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.328 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.341 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.359 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.439 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.440 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.441 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating image(s)#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.474 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.498 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.521 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.523 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.572 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.573 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.574 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.574 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.597 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.599 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4384cda9-2a35-4df4-84b1-a045a41852ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:45.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.838 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 4384cda9-2a35-4df4-84b1-a045a41852ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.889 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.972 230187 DEBUG nova.objects.instance [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Ensure instance console log exists: /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.983 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.984 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:45 np0005532762 nova_compute[230183]: 2025-11-23 21:15:45.984 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:46 np0005532762 nova_compute[230183]: 2025-11-23 21:15:46.075 230187 DEBUG nova.policy [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:15:46 np0005532762 nova_compute[230183]: 2025-11-23 21:15:46.262 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:47 np0005532762 nova_compute[230183]: 2025-11-23 21:15:47.505 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:47.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:48 np0005532762 nova_compute[230183]: 2025-11-23 21:15:48.468 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Successfully created port: deb2e9cc-993f-4f9a-934e-0921fdf22170 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:15:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:48.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:48 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:48Z|00114|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 23 16:15:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.471 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Successfully updated port: deb2e9cc-993f-4f9a-934e-0921fdf22170 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.484 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.485 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.485 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.585 230187 DEBUG nova.compute.manager [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.586 230187 DEBUG nova.compute.manager [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:15:50 np0005532762 nova_compute[230183]: 2025-11-23 21:15:50.586 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:15:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:50.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.072 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:51.073 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:51 np0005532762 nova_compute[230183]: 2025-11-23 21:15:51.266 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:51 np0005532762 nova_compute[230183]: 2025-11-23 21:15:51.341 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:15:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.506 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.773 230187 DEBUG nova.network.neutron [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.793 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.794 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance network_info: |[{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.795 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.795 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.800 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start _get_guest_xml network_info=[{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.805 230187 WARNING nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.815 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.815 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.819 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.libvirt.host [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.820 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.821 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.821 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.822 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.823 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.824 230187 DEBUG nova.virt.hardware [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:15:52 np0005532762 nova_compute[230183]: 2025-11-23 21:15:52.826 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:15:53 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/775750339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.279 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.318 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.324 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:15:53 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1432727154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.774 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.775 230187 DEBUG nova.virt.libvirt.vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:15:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.776 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.776 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.777 230187 DEBUG nova.objects.instance [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <uuid>4384cda9-2a35-4df4-84b1-a045a41852ac</uuid>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <name>instance-0000000c</name>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-1855303843</nova:name>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:15:52</nova:creationTime>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <nova:port uuid="deb2e9cc-993f-4f9a-934e-0921fdf22170">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="serial">4384cda9-2a35-4df4-84b1-a045a41852ac</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="uuid">4384cda9-2a35-4df4-84b1-a045a41852ac</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/4384cda9-2a35-4df4-84b1-a045a41852ac_disk">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:19:cb:53"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <target dev="tapdeb2e9cc-99"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/console.log" append="off"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:15:53 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:15:53 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:15:53 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:15:53 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Preparing to wait for external event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.791 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG nova.virt.libvirt.vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:15:45Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.792 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.793 230187 DEBUG nova.network.os_vif_util [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.793 230187 DEBUG os_vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.794 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.797 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.798 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdeb2e9cc-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.798 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdeb2e9cc-99, col_values=(('external_ids', {'iface-id': 'deb2e9cc-993f-4f9a-934e-0921fdf22170', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:cb:53', 'vm-uuid': '4384cda9-2a35-4df4-84b1-a045a41852ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:53 np0005532762 NetworkManager[49021]: <info>  [1763932553.8009] manager: (tapdeb2e9cc-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.802 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.807 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.807 230187 INFO os_vif [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99')#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.857 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:19:cb:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.858 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Using config drive#033[00m
Nov 23 16:15:53 np0005532762 nova_compute[230183]: 2025-11-23 21:15:53.879 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.379 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.379 230187 DEBUG nova.network.neutron [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.394 230187 DEBUG oslo_concurrency.lockutils [req-af1d7d1e-2630-4c55-b3f5-37d9abe8dcee req-e6298979-66b6-44ec-b74f-c3e8e13eba9d 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.539 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Creating config drive at /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.544 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wst1rus execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.667 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wst1rus" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.694 230187 DEBUG nova.storage.rbd_utils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.697 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:15:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.869 230187 DEBUG oslo_concurrency.processutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config 4384cda9-2a35-4df4-84b1-a045a41852ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.870 230187 INFO nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deleting local config drive /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac/disk.config because it was imported into RBD.#033[00m
Nov 23 16:15:54 np0005532762 kernel: tapdeb2e9cc-99: entered promiscuous mode
Nov 23 16:15:54 np0005532762 NetworkManager[49021]: <info>  [1763932554.9108] manager: (tapdeb2e9cc-99): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Nov 23 16:15:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:54Z|00115|binding|INFO|Claiming lport deb2e9cc-993f-4f9a-934e-0921fdf22170 for this chassis.
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.910 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:54 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:54Z|00116|binding|INFO|deb2e9cc-993f-4f9a-934e-0921fdf22170: Claiming fa:16:3e:19:cb:53 10.100.0.14
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.916 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.918 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.923 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:54 np0005532762 nova_compute[230183]: 2025-11-23 21:15:54.924 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:54 np0005532762 NetworkManager[49021]: <info>  [1763932554.9266] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 23 16:15:54 np0005532762 NetworkManager[49021]: <info>  [1763932554.9272] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.929 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cb:53 10.100.0.14'], port_security=['fa:16:3e:19:cb:53 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4384cda9-2a35-4df4-84b1-a045a41852ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b48e986-896c-496c-81ed-a29a0452333b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=831ed7cd-9739-4cae-9853-0a7c3c8eb72f, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=deb2e9cc-993f-4f9a-934e-0921fdf22170) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.930 142158 INFO neutron.agent.ovn.metadata.agent [-] Port deb2e9cc-993f-4f9a-934e-0921fdf22170 in datapath 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 bound to our chassis#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.931 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7#033[00m
Nov 23 16:15:54 np0005532762 systemd-machined[193469]: New machine qemu-7-instance-0000000c.
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.941 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[436a0b9e-b3ef-40ad-904a-8d97c8539b85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.942 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b2cbb2b-41 in ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.943 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b2cbb2b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.943 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7914774e-451f-4b16-a867-b1fe72c4f05b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.944 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[25995d34-1d05-4d96-b6d6-b895f8cf9def]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.957 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[20825745-9244-4681-95f7-c783513f1b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:54 np0005532762 systemd[1]: Started Virtual Machine qemu-7-instance-0000000c.
Nov 23 16:15:54 np0005532762 systemd-udevd[243108]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:15:54 np0005532762 NetworkManager[49021]: <info>  [1763932554.9789] device (tapdeb2e9cc-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:15:54 np0005532762 NetworkManager[49021]: <info>  [1763932554.9800] device (tapdeb2e9cc-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:15:54 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:54.981 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[cddcde1f-2721-4e52-b9c5-588cf776eb83]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.006 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4b6baa-a916-4349-a82a-69b63c01c781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.013 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 NetworkManager[49021]: <info>  [1763932555.0193] manager: (tap2b2cbb2b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.019 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5413d493-3ab7-4ffc-9543-a3f05b2c390e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:55Z|00117|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 ovn-installed in OVS
Nov 23 16:15:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:55Z|00118|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 up in Southbound
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.027 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.047 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb35861-23e0-4cd1-a838-cd9ba058e76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.049 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe324592-316b-43bc-8d69-9b0a8f198701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 NetworkManager[49021]: <info>  [1763932555.0673] device (tap2b2cbb2b-40): carrier: link connected
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.071 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[c63c9a68-1897-4915-841d-e5ac4ecbcab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.085 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[2181b6e8-c861-459d-8fa7-1ef75d80df64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b2cbb2b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450363, 'reachable_time': 35564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243138, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.100 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[405e5244-489e-4427-8556-ef919c076cc5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:f637'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450363, 'tstamp': 450363}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243139, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.112 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb8d225-92aa-4b2c-b986-4498fd789a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b2cbb2b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f6:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450363, 'reachable_time': 35564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243140, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.140 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4705223b-f14b-43fb-aae5-4945bb46f42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.191 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[ed237c63-3025-42f2-9613-d87dc71a8e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b2cbb2b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.192 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b2cbb2b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:55 np0005532762 kernel: tap2b2cbb2b-40: entered promiscuous mode
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.194 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 NetworkManager[49021]: <info>  [1763932555.1960] manager: (tap2b2cbb2b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.198 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.199 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b2cbb2b-40, col_values=(('external_ids', {'iface-id': '7a9e60a2-aaf5-412e-8508-c425a028014e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.200 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 ovn_controller[132845]: 2025-11-23T21:15:55Z|00119|binding|INFO|Releasing lport 7a9e60a2-aaf5-412e-8508-c425a028014e from this chassis (sb_readonly=0)
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.200 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.201 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.202 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[35f099f9-c01e-4ebd-a42a-1e42b4a6d906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.203 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.pid.haproxy
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:15:55 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:15:55.204 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'env', 'PROCESS_TAG=haproxy-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG nova.compute.manager [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.468 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.469 230187 DEBUG oslo_concurrency.lockutils [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.469 230187 DEBUG nova.compute.manager [req-f4620e74-8350-4a9b-bdb7-01d47cc8d78a req-280107a9-398d-4a73-ba39-e1ba058e783f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Processing event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:15:55 np0005532762 podman[243172]: 2025-11-23 21:15:55.537964837 +0000 UTC m=+0.051331941 container create 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 16:15:55 np0005532762 systemd[1]: Started libpod-conmon-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope.
Nov 23 16:15:55 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:15:55 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18067dbd2039a8291e31d1524e9c7847c294eb0b15485a8b90bbade1b71fdea0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:15:55 np0005532762 podman[243172]: 2025-11-23 21:15:55.509023524 +0000 UTC m=+0.022390658 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:15:55 np0005532762 podman[243172]: 2025-11-23 21:15:55.609187359 +0000 UTC m=+0.122554483 container init 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 16:15:55 np0005532762 podman[243172]: 2025-11-23 21:15:55.615028844 +0000 UTC m=+0.128395948 container start 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 16:15:55 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : New worker (243235) forked
Nov 23 16:15:55 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : Loading success.
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.676 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.677 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.6761324, 4384cda9-2a35-4df4-84b1-a045a41852ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.677 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Started (Lifecycle Event)#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.679 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.682 230187 INFO nova.virt.libvirt.driver [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance spawned successfully.#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.682 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.704 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.708 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.711 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.711 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.712 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.713 230187 DEBUG nova.virt.libvirt.driver [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:15:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.6773012, 4384cda9-2a35-4df4-84b1-a045a41852ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.741 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.765 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.768 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932555.679522, 4384cda9-2a35-4df4-84b1-a045a41852ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.768 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.782 230187 INFO nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.782 230187 DEBUG nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.804 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.806 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.834 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.859 230187 INFO nova.compute.manager [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 11.27 seconds to build instance.#033[00m
Nov 23 16:15:55 np0005532762 nova_compute[230183]: 2025-11-23 21:15:55.877 230187 DEBUG oslo_concurrency.lockutils [None req-ddd9673f-1f1e-4def-be69-ce02870f8adc 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:15:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:15:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.550 230187 DEBUG nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG oslo_concurrency.lockutils [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 DEBUG nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:15:57 np0005532762 nova_compute[230183]: 2025-11-23 21:15:57.551 230187 WARNING nova.compute.manager [req-c04ff297-64bc-428e-a92d-4336cb4bea21 req-fbb8b70d-1fa2-48a7-b23c-980837237a34 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state active and task_state None.#033[00m
Nov 23 16:15:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:15:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:15:58 np0005532762 nova_compute[230183]: 2025-11-23 21:15:58.800 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:15:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:15:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:15:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:15:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:00 np0005532762 podman[243250]: 2025-11-23 21:16:00.672286694 +0000 UTC m=+0.069984429 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:16:00 np0005532762 podman[243249]: 2025-11-23 21:16:00.737071973 +0000 UTC m=+0.146304066 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 16:16:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:01.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:02 np0005532762 nova_compute[230183]: 2025-11-23 21:16:02.515 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.251 230187 DEBUG nova.compute.manager [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.251 230187 DEBUG nova.compute.manager [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.252 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:16:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:03.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:03 np0005532762 nova_compute[230183]: 2025-11-23 21:16:03.803 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.450 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.575 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.577 230187 DEBUG nova.network.neutron [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.597 230187 DEBUG oslo_concurrency.lockutils [req-f33e3d4d-1f97-4d25-a58f-8d1c5c26caf6 req-452c60b0-3cb0-49bd-a288-d22184256254 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:16:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.899 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.961 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:16:04 np0005532762 nova_compute[230183]: 2025-11-23 21:16:04.962 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:16:05 np0005532762 podman[243316]: 2025-11-23 21:16:05.006707328 +0000 UTC m=+0.063888577 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.101 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4707MB free_disk=59.92185592651367GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.103 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance 4384cda9-2a35-4df4-84b1-a045a41852ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.168 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.242 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4169750381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.700 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.706 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.722 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:16:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:05.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.749 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:16:05 np0005532762 nova_compute[230183]: 2025-11-23 21:16:05.750 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:06.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:06 np0005532762 nova_compute[230183]: 2025-11-23 21:16:06.749 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:07 np0005532762 nova_compute[230183]: 2025-11-23 21:16:07.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:07 np0005532762 nova_compute[230183]: 2025-11-23 21:16:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:07 np0005532762 nova_compute[230183]: 2025-11-23 21:16:07.518 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 16:16:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 23 16:16:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 16:16:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2402619270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.590 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.591 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.591 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:16:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:08 np0005532762 nova_compute[230183]: 2025-11-23 21:16:08.805 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:10 np0005532762 ovn_controller[132845]: 2025-11-23T21:16:10Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:cb:53 10.100.0.14
Nov 23 16:16:10 np0005532762 ovn_controller[132845]: 2025-11-23T21:16:10Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:cb:53 10.100.0.14
Nov 23 16:16:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:10.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.350 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.365 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.365 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.366 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:11 np0005532762 nova_compute[230183]: 2025-11-23 21:16:11.447 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:16:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:12 np0005532762 nova_compute[230183]: 2025-11-23 21:16:12.520 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:12.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:13 np0005532762 nova_compute[230183]: 2025-11-23 21:16:13.808 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:15.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:17 np0005532762 nova_compute[230183]: 2025-11-23 21:16:17.522 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:18 np0005532762 nova_compute[230183]: 2025-11-23 21:16:18.809 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:20 np0005532762 nova_compute[230183]: 2025-11-23 21:16:20.618 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:20 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:20.619 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:16:20 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:20.621 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:16:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:16:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:22 np0005532762 nova_compute[230183]: 2025-11-23 21:16:22.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:23 np0005532762 nova_compute[230183]: 2025-11-23 21:16:23.812 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.874 230187 DEBUG nova.compute.manager [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.875 230187 DEBUG nova.compute.manager [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing instance network info cache due to event network-changed-deb2e9cc-993f-4f9a-934e-0921fdf22170. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.875 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.876 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.876 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Refreshing network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.991 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.992 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.993 230187 INFO nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Terminating instance#033[00m
Nov 23 16:16:24 np0005532762 nova_compute[230183]: 2025-11-23 21:16:24.994 230187 DEBUG nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:16:25 np0005532762 kernel: tapdeb2e9cc-99 (unregistering): left promiscuous mode
Nov 23 16:16:25 np0005532762 NetworkManager[49021]: <info>  [1763932585.0426] device (tapdeb2e9cc-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:16:25 np0005532762 ovn_controller[132845]: 2025-11-23T21:16:25Z|00120|binding|INFO|Releasing lport deb2e9cc-993f-4f9a-934e-0921fdf22170 from this chassis (sb_readonly=0)
Nov 23 16:16:25 np0005532762 ovn_controller[132845]: 2025-11-23T21:16:25Z|00121|binding|INFO|Setting lport deb2e9cc-993f-4f9a-934e-0921fdf22170 down in Southbound
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.051 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 ovn_controller[132845]: 2025-11-23T21:16:25Z|00122|binding|INFO|Removing iface tapdeb2e9cc-99 ovn-installed in OVS
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.062 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cb:53 10.100.0.14'], port_security=['fa:16:3e:19:cb:53 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4384cda9-2a35-4df4-84b1-a045a41852ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b48e986-896c-496c-81ed-a29a0452333b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=831ed7cd-9739-4cae-9853-0a7c3c8eb72f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=deb2e9cc-993f-4f9a-934e-0921fdf22170) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.065 142158 INFO neutron.agent.ovn.metadata.agent [-] Port deb2e9cc-993f-4f9a-934e-0921fdf22170 in datapath 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 unbound from our chassis#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.066 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.068 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4d848597-89f0-4bf4-a4d0-8a9f5b196dd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.068 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 namespace which is not needed anymore#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.072 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 23 16:16:25 np0005532762 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Consumed 14.278s CPU time.
Nov 23 16:16:25 np0005532762 systemd-machined[193469]: Machine qemu-7-instance-0000000c terminated.
Nov 23 16:16:25 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : haproxy version is 2.8.14-c23fe91
Nov 23 16:16:25 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [NOTICE]   (243233) : path to executable is /usr/sbin/haproxy
Nov 23 16:16:25 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [WARNING]  (243233) : Exiting Master process...
Nov 23 16:16:25 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [ALERT]    (243233) : Current worker (243235) exited with code 143 (Terminated)
Nov 23 16:16:25 np0005532762 neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7[243223]: [WARNING]  (243233) : All workers exited. Exiting... (0)
Nov 23 16:16:25 np0005532762 systemd[1]: libpod-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope: Deactivated successfully.
Nov 23 16:16:25 np0005532762 podman[243499]: 2025-11-23 21:16:25.193983781 +0000 UTC m=+0.039970208 container died 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.211 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.215 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0-userdata-shm.mount: Deactivated successfully.
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.224 230187 INFO nova.virt.libvirt.driver [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Instance destroyed successfully.#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.224 230187 DEBUG nova.objects.instance [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid 4384cda9-2a35-4df4-84b1-a045a41852ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:16:25 np0005532762 systemd[1]: var-lib-containers-storage-overlay-18067dbd2039a8291e31d1524e9c7847c294eb0b15485a8b90bbade1b71fdea0-merged.mount: Deactivated successfully.
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.234 230187 DEBUG nova.virt.libvirt.vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855303843',display_name='tempest-TestNetworkBasicOps-server-1855303843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855303843',id=12,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAycEnFV4AnrY6tCOqSabQ0TZJ55Jf3TdrBRrViOQ4YjFLRSLQxmifTjYTiV91MZtamqBqC7Pgt4UqC3q5yq6gNP1UI71Vl55q0bshrNqJ4oe/KPbzHMTwu1zmJ8/r6BYA==',key_name='tempest-TestNetworkBasicOps-403372706',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:15:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-g8xsviwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:15:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=4384cda9-2a35-4df4-84b1-a045a41852ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.235 230187 DEBUG nova.network.os_vif_util [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.236 230187 DEBUG nova.network.os_vif_util [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.236 230187 DEBUG os_vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:16:25 np0005532762 podman[243499]: 2025-11-23 21:16:25.237594455 +0000 UTC m=+0.083580872 container cleanup 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.240 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.241 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdeb2e9cc-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.242 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 systemd[1]: libpod-conmon-5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0.scope: Deactivated successfully.
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.245 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.247 230187 INFO os_vif [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:cb:53,bridge_name='br-int',has_traffic_filtering=True,id=deb2e9cc-993f-4f9a-934e-0921fdf22170,network=Network(2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdeb2e9cc-99')#033[00m
Nov 23 16:16:25 np0005532762 podman[243538]: 2025-11-23 21:16:25.306959626 +0000 UTC m=+0.047819007 container remove 5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.312 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9834d19b-feda-4f8d-9813-4ba3f36e92fd]: (4, ('Sun Nov 23 09:16:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 (5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0)\n5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0\nSun Nov 23 09:16:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 (5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0)\n5f3d7e4271a37dd6a522057800b0dddfab00adc1fb3b8d8070ea2e4312e68ff0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.313 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[f9838921-041e-46de-a916-80e25be0de9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.314 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b2cbb2b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:16:25 np0005532762 kernel: tap2b2cbb2b-40: left promiscuous mode
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.316 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.332 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c185db4e-7e13-4c8e-a0ae-ca635342fc97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.355 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7e170269-1475-48a0-841b-ff5a5732ced8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.356 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa58569-b79f-4548-806d-8986e5db16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.376 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[eec2a321-4ac0-425a-bd69-f4d346ed71c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450357, 'reachable_time': 27697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243570, 'error': None, 'target': 'ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.379 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:16:25 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:25.379 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[5a95c1cd-43bc-45ba-9f13-3e3b149831bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:16:25 np0005532762 systemd[1]: run-netns-ovnmeta\x2d2b2cbb2b\x2d4635\x2d48f6\x2d97b3\x2db4c96d1d06f7.mount: Deactivated successfully.
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.656 230187 INFO nova.virt.libvirt.driver [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deleting instance files /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac_del#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.657 230187 INFO nova.virt.libvirt.driver [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deletion of /var/lib/nova/instances/4384cda9-2a35-4df4-84b1-a045a41852ac_del complete#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.715 230187 INFO nova.compute.manager [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG oslo.service.loopingcall [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:16:25 np0005532762 nova_compute[230183]: 2025-11-23 21:16:25.716 230187 DEBUG nova.network.neutron [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:16:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.652 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updated VIF entry in instance network info cache for port deb2e9cc-993f-4f9a-934e-0921fdf22170. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.653 230187 DEBUG nova.network.neutron [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [{"id": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "address": "fa:16:3e:19:cb:53", "network": {"id": "2b2cbb2b-4635-48f6-97b3-b4c96d1d06f7", "bridge": "br-int", "label": "tempest-network-smoke--943664961", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdeb2e9cc-99", "ovs_interfaceid": "deb2e9cc-993f-4f9a-934e-0921fdf22170", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.673 230187 DEBUG oslo_concurrency.lockutils [req-d949908f-d638-426d-a6d8-4c3482289077 req-61db7f0f-ce0b-4efc-bc36-f52e62bd738c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-4384cda9-2a35-4df4-84b1-a045a41852ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:16:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:16:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.798 230187 DEBUG nova.network.neutron [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.813 230187 INFO nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Took 1.10 seconds to deallocate network for instance.#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.864 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.865 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.939 230187 DEBUG oslo_concurrency.processutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.982 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.983 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.984 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.984 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.985 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.985 230187 WARNING nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-unplugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.986 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.986 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.987 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.987 230187 DEBUG oslo_concurrency.lockutils [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.988 230187 DEBUG nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] No waiting events found dispatching network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:16:26 np0005532762 nova_compute[230183]: 2025-11-23 21:16:26.988 230187 WARNING nova.compute.manager [req-0c74657f-17ba-429c-a875-fde7ed452122 req-e25fcdd5-0cad-452c-96fa-99704bd09416 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received unexpected event network-vif-plugged-deb2e9cc-993f-4f9a-934e-0921fdf22170 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.170 230187 DEBUG nova.compute.manager [req-4a7c358d-d791-4ea6-be6f-44bc64ca7be7 req-48b25b98-4a28-484e-9b9f-3b3e91394d1f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Received event network-vif-deleted-deb2e9cc-993f-4f9a-934e-0921fdf22170 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:27 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/318558322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.469 230187 DEBUG oslo_concurrency.processutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.479 230187 DEBUG nova.compute.provider_tree [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.510 230187 DEBUG nova.scheduler.client.report [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.536 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.562 230187 INFO nova.scheduler.client.report [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance 4384cda9-2a35-4df4-84b1-a045a41852ac#033[00m
Nov 23 16:16:27 np0005532762 nova_compute[230183]: 2025-11-23 21:16:27.653 230187 DEBUG oslo_concurrency.lockutils [None req-1ec63843-3750-43cd-a63f-e8aced9f571f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "4384cda9-2a35-4df4-84b1-a045a41852ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.003000078s ======
Nov 23 16:16:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 23 16:16:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:28.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:30 np0005532762 nova_compute[230183]: 2025-11-23 21:16:30.244 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:30 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:30.623 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:16:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:31 np0005532762 podman[243648]: 2025-11-23 21:16:31.650295887 +0000 UTC m=+0.052020190 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:16:31 np0005532762 podman[243647]: 2025-11-23 21:16:31.669713885 +0000 UTC m=+0.083032268 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 16:16:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:32 np0005532762 nova_compute[230183]: 2025-11-23 21:16:32.588 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:34.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:35 np0005532762 nova_compute[230183]: 2025-11-23 21:16:35.247 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:35 np0005532762 podman[243695]: 2025-11-23 21:16:35.648557687 +0000 UTC m=+0.061847802 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 16:16:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:36 np0005532762 nova_compute[230183]: 2025-11-23 21:16:36.166 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:36 np0005532762 nova_compute[230183]: 2025-11-23 21:16:36.198 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:36.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:37 np0005532762 nova_compute[230183]: 2025-11-23 21:16:37.635 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:40 np0005532762 nova_compute[230183]: 2025-11-23 21:16:40.224 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932585.222552, 4384cda9-2a35-4df4-84b1-a045a41852ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:16:40 np0005532762 nova_compute[230183]: 2025-11-23 21:16:40.224 230187 INFO nova.compute.manager [-] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:16:40 np0005532762 nova_compute[230183]: 2025-11-23 21:16:40.250 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:40 np0005532762 nova_compute[230183]: 2025-11-23 21:16:40.268 230187 DEBUG nova.compute.manager [None req-03a0665d-db7e-4407-948d-c3a7c632607f - - - - - -] [instance: 4384cda9-2a35-4df4-84b1-a045a41852ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:16:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:42 np0005532762 nova_compute[230183]: 2025-11-23 21:16:42.686 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:42.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:43.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:45 np0005532762 nova_compute[230183]: 2025-11-23 21:16:45.253 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:45.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:47 np0005532762 nova_compute[230183]: 2025-11-23 21:16:47.689 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:47.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:49.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:50 np0005532762 nova_compute[230183]: 2025-11-23 21:16:50.256 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:50.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.074 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:16:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:52 np0005532762 nova_compute[230183]: 2025-11-23 21:16:52.692 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:52.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:53.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.468 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.468 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.486 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.554 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.555 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.562 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.562 230187 INFO nova.compute.claims [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Nov 23 16:16:54 np0005532762 nova_compute[230183]: 2025-11-23 21:16:54.669 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:54.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:16:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2342618095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.130 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.140 230187 DEBUG nova.compute.provider_tree [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.155 230187 DEBUG nova.scheduler.client.report [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.176 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.177 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.226 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.227 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.259 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.263 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.282 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.366 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.367 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.367 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating image(s)#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.388 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.413 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.435 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.438 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.459 230187 DEBUG nova.policy [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9fb5352c62684f2ba3a326a953a10dfe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '782593db60784ab8bff41fe87d72ff5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.513 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.514 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.514 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.515 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "bbf6854ee7b640c267652b783cf7d20bc820aa56" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.538 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.541 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.830 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bbf6854ee7b640c267652b783cf7d20bc820aa56 f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:16:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:55.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:55 np0005532762 nova_compute[230183]: 2025-11-23 21:16:55.899 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] resizing rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.037 230187 DEBUG nova.objects.instance [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'migration_context' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.055 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.055 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Ensure instance console log exists: /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.056 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.056 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.057 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:16:56 np0005532762 nova_compute[230183]: 2025-11-23 21:16:56.303 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Successfully created port: 984010df-e5b5-45c2-9db5-f0046f5efd50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 16:16:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:16:57 np0005532762 nova_compute[230183]: 2025-11-23 21:16:57.694 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:16:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:16:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:16:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.832 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Successfully updated port: 984010df-e5b5-45c2-9db5-f0046f5efd50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 16:16:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:16:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:16:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:16:59.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.849 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.850 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.850 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.961 230187 DEBUG nova.compute.manager [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.962 230187 DEBUG nova.compute.manager [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:16:59 np0005532762 nova_compute[230183]: 2025-11-23 21:16:59.962 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:17:00 np0005532762 nova_compute[230183]: 2025-11-23 21:17:00.025 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 16:17:00 np0005532762 nova_compute[230183]: 2025-11-23 21:17:00.261 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:00.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.513 230187 DEBUG nova.network.neutron [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.533 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.533 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance network_info: |[{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.534 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.535 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.540 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start _get_guest_xml network_info=[{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': '3c45fa6c-8a99-4359-a34e-d89f4e1e77d0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.547 230187 WARNING nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.554 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.555 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.558 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.libvirt.host [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.559 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T21:05:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='56044b93-2979-48aa-b67f-c37e1b489306',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T21:05:43Z,direct_url=<?>,disk_format='qcow2',id=3c45fa6c-8a99-4359-a34e-d89f4e1e77d0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3f8fb5175f85402ba20cf9c6989d47cf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T21:05:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.560 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.561 230187 DEBUG nova.virt.hardware [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 16:17:01 np0005532762 nova_compute[230183]: 2025-11-23 21:17:01.564 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:01.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:17:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2602542080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.019 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.056 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.060 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 16:17:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29360451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.495 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.497 230187 DEBUG nova.virt.libvirt.vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:16:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.498 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.499 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.500 230187 DEBUG nova.objects.instance [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'pci_devices' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.523 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] End _get_guest_xml xml=<domain type="kvm">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <uuid>f638f2b4-bdf0-46c2-81d0-143511a01fb5</uuid>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <name>instance-0000000d</name>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <memory>131072</memory>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <vcpu>1</vcpu>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <metadata>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:name>tempest-TestNetworkBasicOps-server-793817431</nova:name>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:creationTime>2025-11-23 21:17:01</nova:creationTime>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:flavor name="m1.nano">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:memory>128</nova:memory>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:disk>1</nova:disk>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:swap>0</nova:swap>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:vcpus>1</nova:vcpus>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </nova:flavor>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:owner>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:user uuid="9fb5352c62684f2ba3a326a953a10dfe">tempest-TestNetworkBasicOps-1975357669-project-member</nova:user>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:project uuid="782593db60784ab8bff41fe87d72ff5f">tempest-TestNetworkBasicOps-1975357669</nova:project>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </nova:owner>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:root type="image" uuid="3c45fa6c-8a99-4359-a34e-d89f4e1e77d0"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <nova:ports>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <nova:port uuid="984010df-e5b5-45c2-9db5-f0046f5efd50">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        </nova:port>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </nova:ports>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </nova:instance>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </metadata>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <sysinfo type="smbios">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <system>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="manufacturer">RDO</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="product">OpenStack Compute</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="serial">f638f2b4-bdf0-46c2-81d0-143511a01fb5</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="uuid">f638f2b4-bdf0-46c2-81d0-143511a01fb5</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <entry name="family">Virtual Machine</entry>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </system>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </sysinfo>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <os>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <boot dev="hd"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <smbios mode="sysinfo"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </os>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <features>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <acpi/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <apic/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <vmcoreinfo/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </features>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <clock offset="utc">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <timer name="hpet" present="no"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </clock>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <cpu mode="host-model" match="exact">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </cpu>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  <devices>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <disk type="network" device="disk">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <target dev="vda" bus="virtio"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <disk type="network" device="cdrom">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <driver type="raw" cache="none"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <source protocol="rbd" name="vms/f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.100" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.102" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <host name="192.168.122.101" port="6789"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </source>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <auth username="openstack">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:        <secret type="ceph" uuid="03808be8-ae4a-5548-82e6-4a294f1bc627"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      </auth>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <target dev="sda" bus="sata"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </disk>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <interface type="ethernet">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <mac address="fa:16:3e:63:db:14"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <mtu size="1442"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <target dev="tap984010df-e5"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </interface>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <serial type="pty">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <log file="/var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/console.log" append="off"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </serial>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <video>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <model type="virtio"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </video>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <input type="tablet" bus="usb"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <rng model="virtio">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <backend model="random">/dev/urandom</backend>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </rng>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <controller type="usb" index="0"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    <memballoon model="virtio">
Nov 23 16:17:02 np0005532762 nova_compute[230183]:      <stats period="10"/>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:    </memballoon>
Nov 23 16:17:02 np0005532762 nova_compute[230183]:  </devices>
Nov 23 16:17:02 np0005532762 nova_compute[230183]: </domain>
Nov 23 16:17:02 np0005532762 nova_compute[230183]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Preparing to wait for external event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.525 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.526 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.526 230187 DEBUG nova.virt.libvirt.vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T21:16:55Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.527 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.527 230187 DEBUG nova.network.os_vif_util [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.528 230187 DEBUG os_vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.528 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.529 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.529 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.531 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.531 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap984010df-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.532 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap984010df-e5, col_values=(('external_ids', {'iface-id': '984010df-e5b5-45c2-9db5-f0046f5efd50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:db:14', 'vm-uuid': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:02 np0005532762 NetworkManager[49021]: <info>  [1763932622.5339] manager: (tap984010df-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.533 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.536 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.539 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.539 230187 INFO os_vif [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5')#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.595 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.595 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.596 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] No VIF found with MAC fa:16:3e:63:db:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.596 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Using config drive#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.628 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:17:02 np0005532762 podman[244010]: 2025-11-23 21:17:02.657111179 +0000 UTC m=+0.066743032 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:17:02 np0005532762 podman[244009]: 2025-11-23 21:17:02.661912808 +0000 UTC m=+0.085341609 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.697 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.987 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Creating config drive at /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config#033[00m
Nov 23 16:17:02 np0005532762 nova_compute[230183]: 2025-11-23 21:17:02.998 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom06q4f4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.127 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom06q4f4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.170 230187 DEBUG nova.storage.rbd_utils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] rbd image f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.174 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.327 230187 DEBUG oslo_concurrency.processutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config f638f2b4-bdf0-46c2-81d0-143511a01fb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.328 230187 INFO nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deleting local config drive /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5/disk.config because it was imported into RBD.#033[00m
Nov 23 16:17:03 np0005532762 kernel: tap984010df-e5: entered promiscuous mode
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.3794] manager: (tap984010df-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.379 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:03Z|00123|binding|INFO|Claiming lport 984010df-e5b5-45c2-9db5-f0046f5efd50 for this chassis.
Nov 23 16:17:03 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:03Z|00124|binding|INFO|984010df-e5b5-45c2-9db5-f0046f5efd50: Claiming fa:16:3e:63:db:14 10.100.0.10
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.382 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.384 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.389 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.398 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:db:14 10.100.0.10'], port_security=['fa:16:3e:63:db:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f4166e-7bc0-4981-9683-ade606fa5710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba908e3d-1310-4719-83e3-3b0a3d387de5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84c02252-eea5-46a3-9f52-20439e666f31, chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=984010df-e5b5-45c2-9db5-f0046f5efd50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.399 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 984010df-e5b5-45c2-9db5-f0046f5efd50 in datapath 45f4166e-7bc0-4981-9683-ade606fa5710 bound to our chassis#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.400 142158 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f4166e-7bc0-4981-9683-ade606fa5710#033[00m
Nov 23 16:17:03 np0005532762 systemd-udevd[244120]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.411 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f4ccc-3081-4b1e-a028-d067bd036273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.411 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f4166e-71 in ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 16:17:03 np0005532762 systemd-machined[193469]: New machine qemu-8-instance-0000000d.
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f4166e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9a6bf3-2ef2-4269-9a10-7557351a2f14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.415 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[fc522c9f-cf8f-4d2f-ad3a-d8efdc25c3a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.4245] device (tap984010df-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.4252] device (tap984010df-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.428 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[d97cbbbd-c896-43c6-9e61-4c5365510f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 systemd[1]: Started Virtual Machine qemu-8-instance-0000000d.
Nov 23 16:17:03 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:03Z|00125|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 ovn-installed in OVS
Nov 23 16:17:03 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:03Z|00126|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 up in Southbound
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.452 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[e321efab-7377-4ad8-bb01-e7c3b42ebc2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.453 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.484 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[de409230-f16b-4b1e-b218-be81c04863e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.489 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0eadbd97-31c7-4dc2-bb23-425d7533a2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.4904] manager: (tap45f4166e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Nov 23 16:17:03 np0005532762 systemd-udevd[244123]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.517 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[9422e344-0cda-4423-8cde-4158d614e8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.520 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7fe107-1bec-4538-accf-5435c6197dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.5424] device (tap45f4166e-70): carrier: link connected
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.547 233916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e617df6-2320-47d9-a85d-b8c60c13ed51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.550 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.551 230187 DEBUG nova.network.neutron [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.563 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[548ea005-dfad-400d-954b-6f94a81a1f1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f4166e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:a8:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457211, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244152, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.566 230187 DEBUG oslo_concurrency.lockutils [req-c0394ebe-1cec-485c-a6f2-0060a90b62ed req-bea66f36-f6d9-476a-ad06-c9549ba34201 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.577 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8e205f-5d61-463a-958d-e1ea444cc3c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:a874'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457211, 'tstamp': 457211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244153, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.595 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ce28e9-9bd1-4b1f-acd5-2f6c827eadbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f4166e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:a8:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457211, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244154, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.621 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[187f315e-c52a-45b0-9bfa-94321ce5b526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.668 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8d9be4-3a95-4899-ac27-ee439073cd64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.669 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f4166e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.670 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.670 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f4166e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.671 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 NetworkManager[49021]: <info>  [1763932623.6724] manager: (tap45f4166e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 23 16:17:03 np0005532762 kernel: tap45f4166e-70: entered promiscuous mode
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.674 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.675 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f4166e-70, col_values=(('external_ids', {'iface-id': '4d2b4219-31d6-45aa-9e4b-1dde83c9be1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.676 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:03Z|00127|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.701 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.702 142158 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.703 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfd96de-b5e9-497c-960d-48542e1dec0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.703 142158 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: global
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    log         /dev/log local0 debug
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    log-tag     haproxy-metadata-proxy-45f4166e-7bc0-4981-9683-ade606fa5710
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    user        root
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    group       root
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    maxconn     1024
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    pidfile     /var/lib/neutron/external/pids/45f4166e-7bc0-4981-9683-ade606fa5710.pid.haproxy
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    daemon
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: defaults
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    log global
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    mode http
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    option httplog
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    option dontlognull
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    option http-server-close
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    option forwardfor
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    retries                 3
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    timeout http-request    30s
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    timeout connect         30s
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    timeout client          32s
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    timeout server          32s
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    timeout http-keep-alive 30s
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: listen listener
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    bind 169.254.169.254:80
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]:    http-request add-header X-OVN-Network-ID 45f4166e-7bc0-4981-9683-ade606fa5710
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 16:17:03 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:03.704 142158 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'env', 'PROCESS_TAG=haproxy-45f4166e-7bc0-4981-9683-ade606fa5710', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f4166e-7bc0-4981-9683-ade606fa5710.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 16:17:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:03.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.855 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932623.8549895, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.856 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Started (Lifecycle Event)#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.871 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.874 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932623.8551562, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.875 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Paused (Lifecycle Event)#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.892 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.895 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:17:03 np0005532762 nova_compute[230183]: 2025-11-23 21:17:03.912 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:17:04 np0005532762 podman[244229]: 2025-11-23 21:17:04.026968047 +0000 UTC m=+0.043616465 container create d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 16:17:04 np0005532762 systemd[1]: Started libpod-conmon-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope.
Nov 23 16:17:04 np0005532762 systemd[1]: Started libcrun container.
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.081 230187 DEBUG nova.compute.manager [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG oslo_concurrency.lockutils [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.082 230187 DEBUG nova.compute.manager [req-4df6a19b-52e0-4be5-9a35-3c404dd9f4b4 req-ec2592de-e81f-44d5-8776-779a52d8ae4c 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Processing event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.083 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 16:17:04 np0005532762 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c03b0f65c36e2efcc867601f87a616208b57ce73396437f2aca52a4ea44641ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.086 230187 DEBUG nova.virt.driver [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] Emitting event <LifecycleEvent: 1763932624.0864298, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.087 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Resumed (Lifecycle Event)#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.088 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.091 230187 INFO nova.virt.libvirt.driver [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance spawned successfully.#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.091 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 16:17:04 np0005532762 podman[244229]: 2025-11-23 21:17:04.003406148 +0000 UTC m=+0.020054586 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 23 16:17:04 np0005532762 podman[244229]: 2025-11-23 21:17:04.101209439 +0000 UTC m=+0.117857887 container init d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 16:17:04 np0005532762 podman[244229]: 2025-11-23 21:17:04.106052227 +0000 UTC m=+0.122700665 container start d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.108 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.114 230187 DEBUG nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.118 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.118 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.119 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.120 230187 DEBUG nova.virt.libvirt.driver [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 16:17:04 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : New worker (244249) forked
Nov 23 16:17:04 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : Loading success.
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.148 230187 INFO nova.compute.manager [None req-e7bbcc45-5f8e-432a-b6ac-13b660a0b391 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.178 230187 INFO nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 8.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.178 230187 DEBUG nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.235 230187 INFO nova.compute.manager [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 9.71 seconds to build instance.#033[00m
Nov 23 16:17:04 np0005532762 nova_compute[230183]: 2025-11-23 21:17:04.251 230187 DEBUG oslo_concurrency.lockutils [None req-ab6e9eb4-3e08-4853-a2b8-04ac6af27211 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:05 np0005532762 nova_compute[230183]: 2025-11-23 21:17:05.445 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:05 np0005532762 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:05 np0005532762 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:05 np0005532762 nova_compute[230183]: 2025-11-23 21:17:05.446 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:17:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:05.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.184 230187 DEBUG nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.184 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.185 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.185 230187 DEBUG oslo_concurrency.lockutils [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.186 230187 DEBUG nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.186 230187 WARNING nova.compute.manager [req-a184cfbb-3f6e-442f-b5ed-316c9e458ad9 req-4a7dfad3-8087-45ab-a942-1e9b44f61e5f 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state active and task_state None.#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.450 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.476 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.476 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.477 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:06 np0005532762 podman[244260]: 2025-11-23 21:17:06.639701772 +0000 UTC m=+0.057948319 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 16:17:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:17:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3363301861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.898 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.994 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:17:06 np0005532762 nova_compute[230183]: 2025-11-23 21:17:06.995 230187 DEBUG nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.160 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.162 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4743MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.162 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.163 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.328 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Instance f638f2b4-bdf0-46c2-81d0-143511a01fb5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.329 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.329 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:17:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.431 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.535 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.535 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.538 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.555 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.579 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.627 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:07 np0005532762 nova_compute[230183]: 2025-11-23 21:17:07.699 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:17:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2477343784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.088 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.093 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.108 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.130 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.130 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.131 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.131 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:17:08 np0005532762 nova_compute[230183]: 2025-11-23 21:17:08.143 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:17:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:08.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:09.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.120 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.121 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.122 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.666 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.667 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.668 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 16:17:10 np0005532762 nova_compute[230183]: 2025-11-23 21:17:10.669 230187 DEBUG nova.objects.instance [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lazy-loading 'info_cache' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:17:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:11 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:11Z|00128|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:11 np0005532762 NetworkManager[49021]: <info>  [1763932631.0485] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Nov 23 16:17:11 np0005532762 NetworkManager[49021]: <info>  [1763932631.0491] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.063 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:11 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:11Z|00129|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.083 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.089 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.599 230187 DEBUG nova.compute.manager [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.600 230187 DEBUG nova.compute.manager [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:17:11 np0005532762 nova_compute[230183]: 2025-11-23 21:17:11.600 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:17:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:12 np0005532762 nova_compute[230183]: 2025-11-23 21:17:12.540 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:12 np0005532762 nova_compute[230183]: 2025-11-23 21:17:12.701 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:12.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.110 230187 DEBUG nova.network.neutron [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.132 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.133 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:17:13 np0005532762 nova_compute[230183]: 2025-11-23 21:17:13.134 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:17:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:13.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:14 np0005532762 nova_compute[230183]: 2025-11-23 21:17:14.145 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:17:14 np0005532762 nova_compute[230183]: 2025-11-23 21:17:14.145 230187 DEBUG nova.network.neutron [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:14 np0005532762 nova_compute[230183]: 2025-11-23 21:17:14.156 230187 DEBUG oslo_concurrency.lockutils [req-6c1f1583-7138-4c3b-8fd9-faa660d1ca20 req-f6298826-f633-45c0-bc0d-6ad2cef5b4f4 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:17:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:17 np0005532762 nova_compute[230183]: 2025-11-23 21:17:17.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:17 np0005532762 nova_compute[230183]: 2025-11-23 21:17:17.703 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:18 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:18Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:db:14 10.100.0.10
Nov 23 16:17:18 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:18Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:db:14 10.100.0.10
Nov 23 16:17:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:22 np0005532762 nova_compute[230183]: 2025-11-23 21:17:22.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:22 np0005532762 nova_compute[230183]: 2025-11-23 21:17:22.763 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:22.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:23 np0005532762 nova_compute[230183]: 2025-11-23 21:17:23.966 230187 INFO nova.compute.manager [None req-57d5e31a-c96e-4edd-9c12-886709572b81 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output#033[00m
Nov 23 16:17:23 np0005532762 nova_compute[230183]: 2025-11-23 21:17:23.972 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:17:24 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:24Z|00130|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:24 np0005532762 nova_compute[230183]: 2025-11-23 21:17:24.741 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:24 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:24Z|00131|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:24 np0005532762 nova_compute[230183]: 2025-11-23 21:17:24.837 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:25 np0005532762 nova_compute[230183]: 2025-11-23 21:17:25.967 230187 INFO nova.compute.manager [None req-97e4cc55-5bb6-4a78-9dc9-232dc0466ff5 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output#033[00m
Nov 23 16:17:25 np0005532762 nova_compute[230183]: 2025-11-23 21:17:25.971 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:17:26 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:17:26 np0005532762 nova_compute[230183]: 2025-11-23 21:17:26.742 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:26 np0005532762 NetworkManager[49021]: <info>  [1763932646.7435] manager: (patch-br-int-to-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 23 16:17:26 np0005532762 NetworkManager[49021]: <info>  [1763932646.7444] manager: (patch-provnet-ce139dcc-0def-41ea-bc8f-4f8d9359e223-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 23 16:17:26 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:26Z|00132|binding|INFO|Releasing lport 4d2b4219-31d6-45aa-9e4b-1dde83c9be1c from this chassis (sb_readonly=0)
Nov 23 16:17:26 np0005532762 nova_compute[230183]: 2025-11-23 21:17:26.795 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:26 np0005532762 nova_compute[230183]: 2025-11-23 21:17:26.801 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:27 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:27.028 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:17:27 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:27.030 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.066 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.106 230187 INFO nova.compute.manager [None req-5f8abde0-b8c2-4923-b872-2bd8b2d7a13f 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Get console output#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.110 234120 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 23 16:17:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:17:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:27 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:17:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.765 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.982 230187 DEBUG nova.compute.manager [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.982 230187 DEBUG nova.compute.manager [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing instance network info cache due to event network-changed-984010df-e5b5-45c2-9db5-f0046f5efd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquired lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 16:17:27 np0005532762 nova_compute[230183]: 2025-11-23 21:17:27.983 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Refreshing network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.042 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.043 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.045 230187 INFO nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Terminating instance#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.046 230187 DEBUG nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 23 16:17:28 np0005532762 kernel: tap984010df-e5 (unregistering): left promiscuous mode
Nov 23 16:17:28 np0005532762 NetworkManager[49021]: <info>  [1763932648.1083] device (tap984010df-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 23 16:17:28 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:28Z|00133|binding|INFO|Releasing lport 984010df-e5b5-45c2-9db5-f0046f5efd50 from this chassis (sb_readonly=0)
Nov 23 16:17:28 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:28Z|00134|binding|INFO|Setting lport 984010df-e5b5-45c2-9db5-f0046f5efd50 down in Southbound
Nov 23 16:17:28 np0005532762 ovn_controller[132845]: 2025-11-23T21:17:28Z|00135|binding|INFO|Removing iface tap984010df-e5 ovn-installed in OVS
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.155 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.161 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:db:14 10.100.0.10'], port_security=['fa:16:3e:63:db:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f638f2b4-bdf0-46c2-81d0-143511a01fb5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f4166e-7bc0-4981-9683-ade606fa5710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '782593db60784ab8bff41fe87d72ff5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba908e3d-1310-4719-83e3-3b0a3d387de5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84c02252-eea5-46a3-9f52-20439e666f31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>], logical_port=984010df-e5b5-45c2-9db5-f0046f5efd50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f53b9661b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.162 142158 INFO neutron.agent.ovn.metadata.agent [-] Port 984010df-e5b5-45c2-9db5-f0046f5efd50 in datapath 45f4166e-7bc0-4981-9683-ade606fa5710 unbound from our chassis#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.163 142158 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f4166e-7bc0-4981-9683-ade606fa5710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.164 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0f0084-28ec-4f28-ba2c-986a610fa243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.164 142158 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 namespace which is not needed anymore#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.167 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 23 16:17:28 np0005532762 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Consumed 13.268s CPU time.
Nov 23 16:17:28 np0005532762 systemd-machined[193469]: Machine qemu-8-instance-0000000d terminated.
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.279 230187 INFO nova.virt.libvirt.driver [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Instance destroyed successfully.#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.280 230187 DEBUG nova.objects.instance [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lazy-loading 'resources' on Instance uuid f638f2b4-bdf0-46c2-81d0-143511a01fb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 16:17:28 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : haproxy version is 2.8.14-c23fe91
Nov 23 16:17:28 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [NOTICE]   (244247) : path to executable is /usr/sbin/haproxy
Nov 23 16:17:28 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [ALERT]    (244247) : Current worker (244249) exited with code 143 (Terminated)
Nov 23 16:17:28 np0005532762 neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710[244243]: [WARNING]  (244247) : All workers exited. Exiting... (0)
Nov 23 16:17:28 np0005532762 systemd[1]: libpod-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope: Deactivated successfully.
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.288 230187 DEBUG nova.virt.libvirt.vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T21:16:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-793817431',display_name='tempest-TestNetworkBasicOps-server-793817431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-793817431',id=13,image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyCHalitTYHY+i3k7NGtIz/axejAHzuAlVnR4e5KMHIjAE7Fj+3ovJsaUKuZw9NPKsJ0qVqgikm8FkvL2Pu0+xYGcJBA97J85NKDWDS+eoNhScnnixkt+4uoxHyqB5n7A==',key_name='tempest-TestNetworkBasicOps-1599562746',keypairs=<?>,launch_index=0,launched_at=2025-11-23T21:17:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='782593db60784ab8bff41fe87d72ff5f',ramdisk_id='',reservation_id='r-gf1xk21n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3c45fa6c-8a99-4359-a34e-d89f4e1e77d0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1975357669',owner_user_name='tempest-TestNetworkBasicOps-1975357669-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T21:17:04Z,user_data=None,user_id='9fb5352c62684f2ba3a326a953a10dfe',uuid=f638f2b4-bdf0-46c2-81d0-143511a01fb5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.289 230187 DEBUG nova.network.os_vif_util [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converting VIF {"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.290 230187 DEBUG nova.network.os_vif_util [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.291 230187 DEBUG os_vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.292 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.292 230187 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap984010df-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 podman[244491]: 2025-11-23 21:17:28.297126356 +0000 UTC m=+0.050676313 container died d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.298 230187 INFO os_vif [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:db:14,bridge_name='br-int',has_traffic_filtering=True,id=984010df-e5b5-45c2-9db5-f0046f5efd50,network=Network(45f4166e-7bc0-4981-9683-ade606fa5710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap984010df-e5')#033[00m
Nov 23 16:17:28 np0005532762 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80-userdata-shm.mount: Deactivated successfully.
Nov 23 16:17:28 np0005532762 systemd[1]: var-lib-containers-storage-overlay-c03b0f65c36e2efcc867601f87a616208b57ce73396437f2aca52a4ea44641ae-merged.mount: Deactivated successfully.
Nov 23 16:17:28 np0005532762 podman[244491]: 2025-11-23 21:17:28.340431002 +0000 UTC m=+0.093980939 container cleanup d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:17:28 np0005532762 systemd[1]: libpod-conmon-d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80.scope: Deactivated successfully.
Nov 23 16:17:28 np0005532762 podman[244552]: 2025-11-23 21:17:28.398297657 +0000 UTC m=+0.038689445 container remove d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.403 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c4851907-0083-4365-ba9f-797e11ad3902]: (4, ('Sun Nov 23 09:17:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 (d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80)\nd1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80\nSun Nov 23 09:17:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 (d1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80)\nd1ab6f23c0d2d6ae14bb359c26fddd412e6a6447e012895ce26f36865930fd80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.405 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4d052e-c3f0-4ff9-bce8-cb836caed1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.406 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f4166e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.407 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 kernel: tap45f4166e-70: left promiscuous mode
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.423 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2f7e0a-a01b-4728-ba7b-7c74436acc2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.447 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb33155-3deb-41f1-b182-8b24c95c5622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.448 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[0266d81f-0793-4dfd-a02b-260ddaea32eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.462 233901 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f7b0a8-cf22-41f1-9917-8d8e03e1d1c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457205, 'reachable_time': 37074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244565, 'error': None, 'target': 'ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.464 142272 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f4166e-7bc0-4981-9683-ade606fa5710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 16:17:28 np0005532762 systemd[1]: run-netns-ovnmeta\x2d45f4166e\x2d7bc0\x2d4981\x2d9683\x2dade606fa5710.mount: Deactivated successfully.
Nov 23 16:17:28 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:28.464 142272 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6f2fce-8524-4d55-b3db-0082dd6addd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.733 230187 INFO nova.virt.libvirt.driver [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deleting instance files /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5_del#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.734 230187 INFO nova.virt.libvirt.driver [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deletion of /var/lib/nova/instances/f638f2b4-bdf0-46c2-81d0-143511a01fb5_del complete#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 INFO nova.compute.manager [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG oslo.service.loopingcall [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 16:17:28 np0005532762 nova_compute[230183]: 2025-11-23 21:17:28.792 230187 DEBUG nova.network.neutron [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 16:17:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:29 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:29.031 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.217 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updated VIF entry in instance network info cache for port 984010df-e5b5-45c2-9db5-f0046f5efd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.218 230187 DEBUG nova.network.neutron [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [{"id": "984010df-e5b5-45c2-9db5-f0046f5efd50", "address": "fa:16:3e:63:db:14", "network": {"id": "45f4166e-7bc0-4981-9683-ade606fa5710", "bridge": "br-int", "label": "tempest-network-smoke--1927222341", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "782593db60784ab8bff41fe87d72ff5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap984010df-e5", "ovs_interfaceid": "984010df-e5b5-45c2-9db5-f0046f5efd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.241 230187 DEBUG oslo_concurrency.lockutils [req-1908fb14-e346-4176-89c5-04dbbbe37355 req-f45f3cff-fc0a-44f9-a4b7-5f1c9e56d068 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Releasing lock "refresh_cache-f638f2b4-bdf0-46c2-81d0-143511a01fb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.455 230187 DEBUG nova.network.neutron [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.468 230187 INFO nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.505 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.505 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:29 np0005532762 nova_compute[230183]: 2025-11-23 21:17:29.558 230187 DEBUG oslo_concurrency.processutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:17:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:30 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:17:30 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3313607607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.060 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.061 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.061 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 WARNING nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-unplugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.062 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Acquiring lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG oslo_concurrency.lockutils [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] No waiting events found dispatching network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 WARNING nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received unexpected event network-vif-plugged-984010df-e5b5-45c2-9db5-f0046f5efd50 for instance with vm_state deleted and task_state None.#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.063 230187 DEBUG nova.compute.manager [req-6a37eb5a-a1c1-44de-81d2-a23f9b2ac978 req-9af17b8e-dbe0-401a-879a-a642ae8eed74 30cdb2d57472403e887bb57c5aa3b413 a3a7655dfdb941daaf17f7f95d16950a - - default default] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Received event network-vif-deleted-984010df-e5b5-45c2-9db5-f0046f5efd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.064 230187 DEBUG oslo_concurrency.processutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.071 230187 DEBUG nova.compute.provider_tree [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.083 230187 DEBUG nova.scheduler.client.report [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.097 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.119 230187 INFO nova.scheduler.client.report [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Deleted allocations for instance f638f2b4-bdf0-46c2-81d0-143511a01fb5#033[00m
Nov 23 16:17:30 np0005532762 nova_compute[230183]: 2025-11-23 21:17:30.170 230187 DEBUG oslo_concurrency.lockutils [None req-c54c3f09-1df2-4a26-8817-f60a3671d950 9fb5352c62684f2ba3a326a953a10dfe 782593db60784ab8bff41fe87d72ff5f - - default default] Lock "f638f2b4-bdf0-46c2-81d0-143511a01fb5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:31.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:32 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:32 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:17:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:32 np0005532762 nova_compute[230183]: 2025-11-23 21:17:32.806 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:33 np0005532762 nova_compute[230183]: 2025-11-23 21:17:33.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:33 np0005532762 podman[244617]: 2025-11-23 21:17:33.648887617 +0000 UTC m=+0.059270332 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:17:33 np0005532762 podman[244616]: 2025-11-23 21:17:33.6879296 +0000 UTC m=+0.099103597 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 16:17:33 np0005532762 nova_compute[230183]: 2025-11-23 21:17:33.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:33 np0005532762 nova_compute[230183]: 2025-11-23 21:17:33.812 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:36.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:37 np0005532762 podman[244664]: 2025-11-23 21:17:37.660361361 +0000 UTC m=+0.062357876 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:17:38 np0005532762 nova_compute[230183]: 2025-11-23 21:17:37.833 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:38 np0005532762 nova_compute[230183]: 2025-11-23 21:17:38.295 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:41.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:42 np0005532762 nova_compute[230183]: 2025-11-23 21:17:42.866 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:43 np0005532762 nova_compute[230183]: 2025-11-23 21:17:43.277 230187 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763932648.2768478, f638f2b4-bdf0-46c2-81d0-143511a01fb5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 16:17:43 np0005532762 nova_compute[230183]: 2025-11-23 21:17:43.278 230187 INFO nova.compute.manager [-] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] VM Stopped (Lifecycle Event)#033[00m
Nov 23 16:17:43 np0005532762 nova_compute[230183]: 2025-11-23 21:17:43.301 230187 DEBUG nova.compute.manager [None req-14be77be-a56f-4a8c-896f-97bbe55e44e2 - - - - - -] [instance: f638f2b4-bdf0-46c2-81d0-143511a01fb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 16:17:43 np0005532762 nova_compute[230183]: 2025-11-23 21:17:43.306 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:43.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:44.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:17:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:17:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:46.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:47 np0005532762 nova_compute[230183]: 2025-11-23 21:17:47.868 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:48 np0005532762 nova_compute[230183]: 2025-11-23 21:17:48.308 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:48.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:50.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.075 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:17:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:17:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:17:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:17:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:52 np0005532762 nova_compute[230183]: 2025-11-23 21:17:52.873 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:52.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:53 np0005532762 nova_compute[230183]: 2025-11-23 21:17:53.310 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:53.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:55.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:56.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:17:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:17:57 np0005532762 nova_compute[230183]: 2025-11-23 21:17:57.874 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:57.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:58 np0005532762 nova_compute[230183]: 2025-11-23 21:17:58.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:17:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:17:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:17:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:17:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:17:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:17:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:17:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:00.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:01.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:02 np0005532762 nova_compute[230183]: 2025-11-23 21:18:02.875 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:02.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:03 np0005532762 nova_compute[230183]: 2025-11-23 21:18:03.313 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.576031) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683576078, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2366, "num_deletes": 251, "total_data_size": 6370831, "memory_usage": 6457888, "flush_reason": "Manual Compaction"}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683602686, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4091898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31373, "largest_seqno": 33734, "table_properties": {"data_size": 4082334, "index_size": 6058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19859, "raw_average_key_size": 20, "raw_value_size": 4063205, "raw_average_value_size": 4184, "num_data_blocks": 261, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932480, "oldest_key_time": 1763932480, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26733 microseconds, and 14584 cpu microseconds.
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.602757) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4091898 bytes OK
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.602790) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604608) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604632) EVENT_LOG_v1 {"time_micros": 1763932683604625, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.604657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6360402, prev total WAL file size 6360402, number of live WAL files 2.
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.607267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3995KB)], [60(12MB)]
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683607331, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16732366, "oldest_snapshot_seqno": -1}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6234 keys, 14606260 bytes, temperature: kUnknown
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683724503, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14606260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14565042, "index_size": 24532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159795, "raw_average_key_size": 25, "raw_value_size": 14453297, "raw_average_value_size": 2318, "num_data_blocks": 987, "num_entries": 6234, "num_filter_entries": 6234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932683, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.724757) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14606260 bytes
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.725986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.1 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6755, records dropped: 521 output_compression: NoCompression
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.726012) EVENT_LOG_v1 {"time_micros": 1763932683726001, "job": 36, "event": "compaction_finished", "compaction_time_micros": 117241, "compaction_time_cpu_micros": 48731, "output_level": 6, "num_output_files": 1, "total_output_size": 14606260, "num_input_records": 6755, "num_output_records": 6234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683727071, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932683729407, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.607121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:18:03.729513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:18:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:04 np0005532762 ovn_controller[132845]: 2025-11-23T21:18:04Z|00136|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Nov 23 16:18:04 np0005532762 podman[244728]: 2025-11-23 21:18:04.671228395 +0000 UTC m=+0.071110789 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 16:18:04 np0005532762 podman[244727]: 2025-11-23 21:18:04.693250493 +0000 UTC m=+0.108195499 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 16:18:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:04.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:05 np0005532762 nova_compute[230183]: 2025-11-23 21:18:05.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:06 np0005532762 nova_compute[230183]: 2025-11-23 21:18:06.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.456 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.877 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:18:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4074893024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:18:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:07 np0005532762 nova_compute[230183]: 2025-11-23 21:18:07.956 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:18:08 np0005532762 podman[244822]: 2025-11-23 21:18:08.06088922 +0000 UTC m=+0.062611323 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.113 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4944MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.114 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.181 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.181 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.204 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.315 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:18:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3638365196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.637 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.642 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.655 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.673 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:18:08 np0005532762 nova_compute[230183]: 2025-11-23 21:18:08.673 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:18:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:18:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:18:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.675 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.676 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.692 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.693 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:11 np0005532762 nova_compute[230183]: 2025-11-23 21:18:11.693 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:18:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:12 np0005532762 nova_compute[230183]: 2025-11-23 21:18:12.880 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:12.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:13 np0005532762 nova_compute[230183]: 2025-11-23 21:18:13.317 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:15 np0005532762 nova_compute[230183]: 2025-11-23 21:18:15.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:18:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:16.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:17 np0005532762 nova_compute[230183]: 2025-11-23 21:18:17.883 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:17.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:18 np0005532762 nova_compute[230183]: 2025-11-23 21:18:18.319 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:18.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:22 np0005532762 nova_compute[230183]: 2025-11-23 21:18:22.885 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:23.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:23 np0005532762 nova_compute[230183]: 2025-11-23 21:18:23.321 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:27 np0005532762 nova_compute[230183]: 2025-11-23 21:18:27.939 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:28 np0005532762 systemd-logind[793]: New session 55 of user zuul.
Nov 23 16:18:28 np0005532762 systemd[1]: Started Session 55 of User zuul.
Nov 23 16:18:28 np0005532762 nova_compute[230183]: 2025-11-23 21:18:28.323 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:31.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:31 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:18:31 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3061329609' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:18:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:31.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:32 np0005532762 nova_compute[230183]: 2025-11-23 21:18:32.978 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:18:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:33 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:18:33 np0005532762 nova_compute[230183]: 2025-11-23 21:18:33.326 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:33.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:35 np0005532762 podman[245328]: 2025-11-23 21:18:35.642695083 +0000 UTC m=+0.052878902 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 16:18:35 np0005532762 podman[245327]: 2025-11-23 21:18:35.670631819 +0000 UTC m=+0.080600333 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 16:18:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:37 np0005532762 ovs-vsctl[245424]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 16:18:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:37 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:37 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:18:37 np0005532762 nova_compute[230183]: 2025-11-23 21:18:37.977 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:38 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 16:18:38 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 16:18:38 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 16:18:38 np0005532762 nova_compute[230183]: 2025-11-23 21:18:38.328 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:38 np0005532762 podman[245639]: 2025-11-23 21:18:38.668622558 +0000 UTC m=+0.066716432 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 16:18:38 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: cache status {prefix=cache status} (starting...)
Nov 23 16:18:38 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:38 np0005532762 lvm[245758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:18:38 np0005532762 lvm[245758]: VG ceph_vg0 finished
Nov 23 16:18:38 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: client ls {prefix=client ls} (starting...)
Nov 23 16:18:38 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:39.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:39 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:18:39 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2288206649' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 16:18:39 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/664271478' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2649457878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: ops {prefix=ops} (starting...)
Nov 23 16:18:40 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 16:18:40 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/544923333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 16:18:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/724445792' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2661117439' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:18:41 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: session ls {prefix=session ls} (starting...)
Nov 23 16:18:41 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:18:41 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: status {prefix=status} (starting...)
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 16:18:41 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036111529' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 16:18:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2193134433' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190622379' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2375060761' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2923257280' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:18:42 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1796376712' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:18:42 np0005532762 nova_compute[230183]: 2025-11-23 21:18:42.982 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:43 np0005532762 nova_compute[230183]: 2025-11-23 21:18:43.330 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1143614854' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3636856428' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:18:43 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3023136414' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:18:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:44 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 16:18:44 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3945838404' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 16:18:44 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:18:44 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/898839678' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:18:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2696775381' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.667137146s of 13.731669426s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3383296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 3375104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 3375104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 3366912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 3366912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3358720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 3350528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 3350528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3342336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3342336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3334144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3334144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3145728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3145728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935c00 session 0x55805cd1d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9800 session 0x55805a7e63c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980878 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 3072000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 103.327316284s of 103.344772339s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981010 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979828 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 3006464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.843829155s of 14.855600357s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 2998272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 2981888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 2981888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 2973696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 2973696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805cf8c5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b2a52c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 2965504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 2949120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 2949120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 2940928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 2940928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 2932736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 2924544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 2924544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979696 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.296117783s of 16.299619675s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 2916352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 2908160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 2899968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979828 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 2899968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 2891776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 2891776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 2883584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 2883584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 2875392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 2867200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 2867200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 2859008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 2859008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.764848709s of 16.771341324s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 2850816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 2842624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 2842624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 2834432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 2826240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 2826240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2818048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2818048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 2809856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 2801664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24d000 session 0x55805d8bf0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d92b680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 2793472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 2793472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 2785280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 2777088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 2777088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 2768896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981208 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 2760704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 2760704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.872358322s of 27.248651505s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2752512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981340 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 2744320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 2744320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1687552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1687552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982852 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1679360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1671168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982852 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098936081s of 12.109712601s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1622016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1622016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1613824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1605632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1605632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1589248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1589248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1662976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1654784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1646592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1638400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1630208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
Cumulative WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8328 writes, 34K keys, 8328 commit groups, 1.0 writes per commit group, ingest: 21.45 MB, 0.04 MB/s
Interval WAL: 8328 writes, 1694 syncs, 4.92 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1572864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1564672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1564672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1556480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1548288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1548288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1540096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1531904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1531904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1523712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1523712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1515520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1507328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1499136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1490944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1482752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1482752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1474560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1474560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1466368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1458176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1458176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 1449984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1441792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1441792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1433600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1433600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1425408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1425408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1417216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1417216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1400832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1392640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1384448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1384448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1368064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1368064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1359872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1359872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1351680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1343488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1343488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1335296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1318912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1318912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cf8d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cfb2f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1302528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1286144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1277952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1277952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982129 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 104.574203491s of 104.905845642s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1269760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1261568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1261568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982261 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1253376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1245184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1245184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983773 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1236992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1236992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1228800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.095993042s of 12.122361183s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1228800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1220608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983182 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1220608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1212416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1212416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1204224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1196032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1196032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1187840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1187840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1179648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1171456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1171456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1163264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1155072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1155072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1146880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1146880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1138688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1130496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1130496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1122304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1122304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8be000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1114112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1105920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1105920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.643814087s of 38.652179718s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 1056768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1949696 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983050 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.975281239s of 10.038576126s, submitted: 382
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d5663c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9800 session 0x55805d8bf680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983182 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805a6730e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805a67af00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984562 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984562 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.571740150s of 14.652972221s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984694 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984826 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.419490814s of 10.425830841s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1810432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986206 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805cd7d680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985615 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985483 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.834005356s of 14.852039337s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985615 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987127 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.555835724s of 14.712920189s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d3f1c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24d000 session 0x55805cfb2000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986404 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986404 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.893251419s of 12.898416519s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805b7d63c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805c4554a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986536 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.695014000s of 15.724806786s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986077 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cf8c000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 1802240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.657769203s of 15.741616249s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985813 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985813 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985945 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.062019348s of 16.124525070s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987457 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc7cb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c000 session 0x55805cd75680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987325 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.114776611s of 33.127079010s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987457 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988969 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.612201691s of 10.619262695s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988837 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c452000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.244842529s of 58.250808716s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988378 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989890 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.113847733s of 16.138629913s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805c633860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.863085747s of 13.866735458s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990811 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.973569870s of 14.986274719s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cc7e5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805a7e6b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8adc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.993670464s of 10.996788025s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991864 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.335764885s of 11.355053902s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992785 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805c452b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805d25f0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.579681396s of 43.622188568s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995677 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.178874969s of 11.839152336s, submitted: 5
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cd7cb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d565c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.315622330s of 29.319524765s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994495 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d64ac00 session 0x55805cd1c960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999031 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.826273918s of 13.858831406s, submitted: 6
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805d564f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805b7d72c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.386011124s of 20.388910294s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805a67a960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805a7e5860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997258 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.066018105s of 12.072667122s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.506420135s of 22.516693115s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b434b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805d3f05a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.324840546s of 47.329608917s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.818011284s of 13.826013565s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cc80b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cc7c1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.236343384s of 33.268554688s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001203 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.904835701s of 10.988478661s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d862f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805cc7cb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.852489471s of 18.009616852s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000612 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.888288498s of 13.898387909s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d564f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b7d74a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887771606s of 14.891558647s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,2])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999370 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 1425408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 65536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.794444084s of 14.003565788s, submitted: 364
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805c455c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000810 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.896261215s of 16.902111053s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002454 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.281729698s of 15.294480324s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c455a40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c7f0000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805c7f10e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.912832260s of 30.916051865s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.676693916s of 14.759789467s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805cc7eb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.021399498s of 11.037414551s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004164 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007320 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.964635849s of 10.006252289s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc80780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f4d20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.184024811s of 46.203655243s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d5652c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c7ef0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.059599876s of 12.068504333s, submitted: 2
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 142 handle_osd_map epochs [143,144], i have 142, src has [1,144]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.084339142s of 10.095458031s, submitted: 3
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016839 data_alloc: 218103808 data_used: 266240
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 2088960 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 145 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4aeb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2031616 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5dc000/0x0/0x4ffc00000, data 0x170fbd/0x22e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 18694144 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 146 ms_handle_reset con 0x55805b24c400 session 0x55805b69cf00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134726 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137156 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.137514114s of 14.367403030s, submitted: 61
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805a9f9000 session 0x55805d4afe00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.764553070s of 15.769596100s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136316 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 18620416 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139340 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805c4534a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d650800 session 0x55805d3f14a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d651c00 session 0x55805d4721e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.763429642s of 17.799776077s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d740000 session 0x55805b7d6b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d64a800 session 0x55805d3f05a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138769 data_alloc: 218103808 data_used: 278528
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x11771be/0x123a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 18579456 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805d92ad20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d651c00 session 0x55805d8bfa40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d740000 session 0x55805a6734a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805e2f1c00 session 0x55805cfb3a40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805b24c400 session 0x55805a7e7680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268233 data_alloc: 218103808 data_used: 278528
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805cc7cd20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.449963570s of 10.389714241s, submitted: 77
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 21430272 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 20733952 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365949 data_alloc: 234881024 data_used: 14716928
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367067 data_alloc: 234881024 data_used: 14716928
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x207333d/0x2139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367675 data_alloc: 234881024 data_used: 14733312
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.585947990s of 12.599705696s, submitted: 21
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114442240 unmapped: 876544 heap: 115318784 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8928000/0x0/0x4ffc00000, data 0x2c7e33d/0x2d44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,8])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 1097728 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88fe000/0x0/0x4ffc00000, data 0x2ca733d/0x2d6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475763 data_alloc: 234881024 data_used: 16691200
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f3000/0x0/0x4ffc00000, data 0x2cb333d/0x2d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475931 data_alloc: 234881024 data_used: 16703488
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 3506176 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.262884140s of 14.072373390s, submitted: 141
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805cc7c1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805cc80000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476259 data_alloc: 234881024 data_used: 16764928
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.868956566s of 10.927964211s, submitted: 6
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9400 session 0x55805c7f03c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd7c1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805d92a5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805d92bc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e7e00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c62000/0x0/0x4ffc00000, data 0x394339f/0x3a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d4af4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd1d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1570244 data_alloc: 234881024 data_used: 16769024
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805cc80f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805a7e5860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 15720448 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 14475264 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 7249920 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.360017776s of 17.521881104s, submitted: 38
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 6299648 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6c19000/0x0/0x4ffc00000, data 0x498a3d2/0x4a53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 7643136 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796399 data_alloc: 234881024 data_used: 26443776
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796687 data_alloc: 234881024 data_used: 26435584
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805a6730e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e6b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.716887474s of 10.034674644s, submitted: 127
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805d8be3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851c000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492452 data_alloc: 234881024 data_used: 12238848
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851b000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d651c00 session 0x55805d25e000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d740000 session 0x55805cf8d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ee000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805ac0a000 session 0x55805c454b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb2f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b69d2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.443714142s of 18.708480835s, submitted: 87
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181066 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd7c3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805cd7cf00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1c00 session 0x55805b7f5c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b7f5a40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207078 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa062000/0x0/0x4ffc00000, data 0x15452db/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7f43c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b2a5a40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1400 session 0x55805b2a4b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.920416832s of 12.955449104s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b2a4960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211460 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.594253540s of 12.610000610s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 31424512 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a24000/0x0/0x4ffc00000, data 0x17722eb/0x1838000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a07000/0x0/0x4ffc00000, data 0x178f2eb/0x1855000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264450 data_alloc: 218103808 data_used: 4370432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x17972eb/0x185d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263650 data_alloc: 218103808 data_used: 4370432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fd000/0x0/0x4ffc00000, data 0x17992eb/0x185f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.573128700s of 12.650348663s, submitted: 29
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd74000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.208628654s of 13.212368965s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b7f43c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1000 session 0x55805cfb2f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a7e6b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.695281982s of 20.782047272s, submitted: 14
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805d25e5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9400 session 0x55805a673860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a672960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8be780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cfb34a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805a7e74a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805b435c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 34578432 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92b0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d92a5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b69d2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b69cb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d567c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d567680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d566780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 34570240 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.721254349s of 10.977932930s, submitted: 86
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d5663c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9c00 session 0x55805d25ef00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d25fe00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d56d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d56de00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241073 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d92a5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af4000/0x0/0x4ffc00000, data 0x16a22eb/0x1768000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9800 session 0x55805a7e6b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a7e74a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242887 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.606376648s of 16.673978806s, submitted: 14
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28213248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 27590656 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368463 data_alloc: 218103808 data_used: 6819840
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362511 data_alloc: 218103808 data_used: 6823936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666546822s of 14.050541878s, submitted: 114
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f909e000/0x0/0x4ffc00000, data 0x20f72fb/0x21be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362735 data_alloc: 218103808 data_used: 6823936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805b434780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d92ab40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb000 session 0x55805cd745a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805cd74f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8bfe00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396522 data_alloc: 218103808 data_used: 6823936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 27639808 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8be5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8bf2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb400 session 0x55805d8bf860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050792694s of 10.149922371s, submitted: 31
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398336 data_alloc: 218103808 data_used: 6823936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 27525120 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 25681920 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 25649152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425240 data_alloc: 234881024 data_used: 10756096
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425048 data_alloc: 234881024 data_used: 10756096
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.226175308s of 12.244213104s, submitted: 6
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8b1f000/0x0/0x4ffc00000, data 0x267436d/0x273d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 22151168 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 22839296 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494334 data_alloc: 234881024 data_used: 11829248
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84ce000/0x0/0x4ffc00000, data 0x2cbd36d/0x2d86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488982 data_alloc: 234881024 data_used: 11833344
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4f00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c7f0960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.527749062s of 10.052300453s, submitted: 105
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s#012Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84d3000/0x0/0x4ffc00000, data 0x2cc036d/0x2d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d92af00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371249 data_alloc: 218103808 data_used: 6823936
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d8be3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7d7860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cbd000/0x0/0x4ffc00000, data 0x20f92fb/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4721e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.834026337s of 34.967418671s, submitted: 43
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d6780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8ac3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8ad680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233960 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.569715500s of 17.644144058s, submitted: 15
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 28090368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335881 data_alloc: 218103808 data_used: 1634304
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 26198016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91fb000/0x0/0x4ffc00000, data 0x1f9c2db/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350955 data_alloc: 218103808 data_used: 2863104
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d56d680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56de00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d25f860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.964529037s of 25.611534119s, submitted: 88
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d25ef00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d567c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d566780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d5663c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69d2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437857 data_alloc: 218103808 data_used: 2867200
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d4afe00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 33161216 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 27746304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 25223168 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 25190400 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.176643372s of 17.280221939s, submitted: 17
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7989000/0x0/0x4ffc00000, data 0x380d2eb/0x38d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a935800 session 0x55805d56c780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f21e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.359371185s of 28.531023026s, submitted: 61
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 24297472 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1617122 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 24215552 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 24076288 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 22880256 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.529578209s of 15.271432877s, submitted: 389
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 23912448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d566000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.931664467s of 11.936676025s, submitted: 1
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d863e00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c453e00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360030 data_alloc: 218103808 data_used: 2863104
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d3f14a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d4afc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d862d20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c6323c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d4ae960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.715570450s of 28.907997131s, submitted: 67
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 32841728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69cb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d4afa40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d3f10e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805a673860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d8623c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e88000/0x0/0x4ffc00000, data 0x130f2db/0x13d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d5641e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252852 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805cfb2000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805b7f4960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258466 data_alloc: 218103808 data_used: 815104
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805cd752c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805c3fcb40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d61e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.481660843s of 18.543272018s, submitted: 18
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805cd7d2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56de00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d56cf00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c452000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805c452780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f994c000/0x0/0x4ffc00000, data 0x184b2db/0x1910000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d3f0d20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 33071104 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292449 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9928000/0x0/0x4ffc00000, data 0x186f2db/0x1934000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 33005568 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d5650e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da400 session 0x55805d8adc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a673860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: mgrc ms_handle_reset ms_handle_reset con 0x55805cfc4c00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: mgrc handle_mgr_configure stats_period=5
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f8400 session 0x55805d863860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d64ac00 session 0x55805b4350e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d25fa40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d25e000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ada40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0400 session 0x55805d92bc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.006832123s of 37.314971924s, submitted: 21
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ac3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d56c5a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d65dc20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d3f1680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d8bf2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277031 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d74a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805cd745a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc80b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc812c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 35921920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278845 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 36241408 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.161880493s of 19.237621307s, submitted: 18
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 31727616 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 31547392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361745 data_alloc: 218103808 data_used: 1740800
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356265 data_alloc: 218103808 data_used: 1740800
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356569 data_alloc: 218103808 data_used: 1748992
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.781532288s of 14.450411797s, submitted: 111
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805b69d4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d25e1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 31842304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b2a52c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9443000/0x0/0x4ffc00000, data 0x1d522fb/0x1e19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d4730e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7fe00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc7f2c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cc7ed20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.686355591s of 23.753026962s, submitted: 20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a673680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805a6734a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 38707200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc810e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc000 session 0x55805d3f1c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d3f01e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a7e63c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.395429611s of 17.444917679s, submitted: 6
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a7e6d20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805a7e65a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fcc00 session 0x55805cd752c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4aef00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805c452780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 37216256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 35168256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 35995648 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475407 data_alloc: 218103808 data_used: 7741440
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d25fa40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d4723c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483725 data_alloc: 218103808 data_used: 7733248
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd000 session 0x55805d65d680
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cd74960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 35282944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123011072 unmapped: 32497664 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 25288704 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.900854111s of 21.241012573s, submitted: 73
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 21102592 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,16,0,27])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 22020096 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629540 data_alloc: 234881024 data_used: 19615744
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 21970944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f44000/0x0/0x4ffc00000, data 0x325130e/0x3318000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635894 data_alloc: 234881024 data_used: 19615744
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255509377s of 12.832665443s, submitted: 75
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d8be1e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d8bf0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92b0e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416071 data_alloc: 218103808 data_used: 7733248
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d472b40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d5652c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 34758656 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843377113s of 10.034677505s, submitted: 59
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d565860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9302000/0x0/0x4ffc00000, data 0x1e902db/0x1f55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a99b4a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a99ba40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92a960
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92af00
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.899166107s of 25.910942078s, submitted: 4
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d92ba40
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ac3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805b7f4780
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd400 session 0x55805cfb30e0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb23c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cfb3c20
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cfb3860
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805c7f0000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd800 session 0x55805c7f12c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 35086336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.807069778s of 19.842288971s, submitted: 9
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 31932416 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c7f05a0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cd7c3c0
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.674288750s of 12.870928764s, submitted: 67
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7e000
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 34152448 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 34668544 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 34684928 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:18:45 np0005532762 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2141623699' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:18:45 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079711421' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:18:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:18:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:46.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:18:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:18:46 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1895009701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:18:46 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 16:18:46 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2726670865' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 16:18:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 16:18:47 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1319081198' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 16:18:47 np0005532762 nova_compute[230183]: 2025-11-23 21:18:47.984 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:18:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:48.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3684318383' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 16:18:48 np0005532762 nova_compute[230183]: 2025-11-23 21:18:48.331 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2607410095' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 16:18:48 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3220501528' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:49.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096267481' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/210989180' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/746869594' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/715654558' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 16:18:49 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3864745210' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 16:18:49 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 16:18:50 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 16:18:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:50.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1508255163' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620366759' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3524373913' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 16:18:50 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1660088284' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 16:18:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.076 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:18:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:18:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:18:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263217182' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068717160' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174909781' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 16:18:51 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584137550' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 16:18:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:52.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 16:18:52 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864661257' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 16:18:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 16:18:52 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3388997346' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 16:18:53 np0005532762 nova_compute[230183]: 2025-11-23 21:18:53.021 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:18:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:53.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:53 np0005532762 nova_compute[230183]: 2025-11-23 21:18:53.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:18:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 16:18:53 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1817915753' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 16:18:53 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:18:53 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2836468749' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:54 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 16:18:54 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1210590168' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 16:18:54 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:54 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:55.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1583926911' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:18:55 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982132384' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 23 16:18:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3369457675' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 23 16:18:56 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1872188886' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 23 16:18:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:18:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:57.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:18:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:18:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 23 16:18:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3894689480' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 23 16:18:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:18:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:18:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:18:58 np0005532762 nova_compute[230183]: 2025-11-23 21:18:58.059 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 23 16:18:58 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1628049275' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 23 16:18:58 np0005532762 nova_compute[230183]: 2025-11-23 21:18:58.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:18:58 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 23 16:18:58 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/247555263' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 23 16:18:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:18:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:18:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:18:59.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:18:59 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 23 16:18:59 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3893524027' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 23 16:19:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:01 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 23 16:19:01 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1478691887' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 23 16:19:01 np0005532762 ovs-appctl[249574]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 16:19:01 np0005532762 ovs-appctl[249585]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 16:19:01 np0005532762 ovs-appctl[249596]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 23 16:19:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:02.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 23 16:19:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1255782784' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 23 16:19:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 23 16:19:02 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/743998604' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 23 16:19:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:03 np0005532762 nova_compute[230183]: 2025-11-23 21:19:03.059 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:03 np0005532762 nova_compute[230183]: 2025-11-23 21:19:03.337 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:03 np0005532762 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 16:19:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3870850034' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1747467823' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 23 16:19:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4014633627' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:19:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/273507409' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:06.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618491881' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2441055417' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 23 16:19:06 np0005532762 podman[251039]: 2025-11-23 21:19:06.643855009 +0000 UTC m=+0.055227431 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:19:06 np0005532762 podman[251038]: 2025-11-23 21:19:06.678902972 +0000 UTC m=+0.089874024 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 23 16:19:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2856170711' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:07.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 23 16:19:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/784096012' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 23 16:19:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.457 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:19:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:19:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051085304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:19:07 np0005532762 nova_compute[230183]: 2025-11-23 21:19:07.908 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:19:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.057 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.058 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4728MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.059 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.059 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/102875721' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.109 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.340 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.386 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.387 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.414 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1363249802' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:08 np0005532762 podman[251282]: 2025-11-23 21:19:08.768573148 +0000 UTC m=+0.066869412 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:19:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2917113464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.871 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.878 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.890 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.892 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:19:08 np0005532762 nova_compute[230183]: 2025-11-23 21:19:08.892 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:19:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 23 16:19:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1353647905' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 16:19:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 23 16:19:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2424626875' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 23 16:19:10 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1708152719' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 23 16:19:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 23 16:19:10 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1026958516' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:11 np0005532762 nova_compute[230183]: 2025-11-23 21:19:11.891 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:11 np0005532762 nova_compute[230183]: 2025-11-23 21:19:11.891 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:11 np0005532762 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:11 np0005532762 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:11 np0005532762 nova_compute[230183]: 2025-11-23 21:19:11.892 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:19:11 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 16:19:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:19:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3374333239' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:19:12 np0005532762 systemd[1]: Starting Time & Date Service...
Nov 23 16:19:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:12 np0005532762 nova_compute[230183]: 2025-11-23 21:19:12.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:12 np0005532762 nova_compute[230183]: 2025-11-23 21:19:12.425 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:19:12 np0005532762 nova_compute[230183]: 2025-11-23 21:19:12.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:19:12 np0005532762 nova_compute[230183]: 2025-11-23 21:19:12.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:19:12 np0005532762 nova_compute[230183]: 2025-11-23 21:19:12.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:19:12 np0005532762 systemd[1]: Started Time & Date Service.
Nov 23 16:19:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 23 16:19:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773746850' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 23 16:19:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:13.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:13 np0005532762 nova_compute[230183]: 2025-11-23 21:19:13.110 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:13 np0005532762 nova_compute[230183]: 2025-11-23 21:19:13.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 23 16:19:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/739475637' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 23 16:19:14 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2520495412' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 23 16:19:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:15.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:18 np0005532762 nova_compute[230183]: 2025-11-23 21:19:18.149 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:18 np0005532762 nova_compute[230183]: 2025-11-23 21:19:18.344 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:21.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:23.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:23 np0005532762 nova_compute[230183]: 2025-11-23 21:19:23.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:23 np0005532762 nova_compute[230183]: 2025-11-23 21:19:23.346 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:25.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:27.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:28.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:28 np0005532762 nova_compute[230183]: 2025-11-23 21:19:28.151 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:28 np0005532762 nova_compute[230183]: 2025-11-23 21:19:28.349 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:29.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:30.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:33 np0005532762 nova_compute[230183]: 2025-11-23 21:19:33.205 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:33 np0005532762 nova_compute[230183]: 2025-11-23 21:19:33.350 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:34.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:36.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:37.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:37 np0005532762 podman[252153]: 2025-11-23 21:19:37.796109219 +0000 UTC m=+0.051717478 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:19:37 np0005532762 podman[252151]: 2025-11-23 21:19:37.854066902 +0000 UTC m=+0.109230010 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 23 16:19:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:38.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:38 np0005532762 nova_compute[230183]: 2025-11-23 21:19:38.205 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:38 np0005532762 nova_compute[230183]: 2025-11-23 21:19:38.352 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:19:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:38 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:19:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:39 np0005532762 podman[252250]: 2025-11-23 21:19:39.638713085 +0000 UTC m=+0.053471405 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 23 16:19:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:40.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:19:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:19:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:42.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:42 np0005532762 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 16:19:42 np0005532762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 16:19:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:43.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:43 np0005532762 nova_compute[230183]: 2025-11-23 21:19:43.207 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:43 np0005532762 nova_compute[230183]: 2025-11-23 21:19:43.354 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:43 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:19:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:44.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:45.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:48.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:48 np0005532762 nova_compute[230183]: 2025-11-23 21:19:48.209 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:48 np0005532762 nova_compute[230183]: 2025-11-23 21:19:48.355 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.077 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:19:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:19:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:19:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:19:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:19:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:19:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:52.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:53.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:53 np0005532762 nova_compute[230183]: 2025-11-23 21:19:53.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:53 np0005532762 nova_compute[230183]: 2025-11-23 21:19:53.356 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:54.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:55.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:56.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:19:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:57.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:19:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:19:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:19:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:19:58 np0005532762 nova_compute[230183]: 2025-11-23 21:19:58.212 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:58 np0005532762 nova_compute[230183]: 2025-11-23 21:19:58.357 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:19:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:19:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:19:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:19:59.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:00 np0005532762 ceph-mon[80135]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532762 ceph-mon[80135]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 23 16:20:00 np0005532762 ceph-mon[80135]:    daemon nfs.cephfs.0.0.compute-1.fuxuha on compute-1 is in error state
Nov 23 16:20:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:01.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:02.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:03.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:03 np0005532762 nova_compute[230183]: 2025-11-23 21:20:03.267 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:03 np0005532762 nova_compute[230183]: 2025-11-23 21:20:03.358 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:04.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:04 np0005532762 systemd[1]: session-55.scope: Deactivated successfully.
Nov 23 16:20:04 np0005532762 systemd[1]: session-55.scope: Consumed 2min 51.602s CPU time, 751.3M memory peak, read 284.1M from disk, written 65.2M to disk.
Nov 23 16:20:04 np0005532762 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Nov 23 16:20:04 np0005532762 systemd-logind[793]: Removed session 55.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: New session 56 of user zuul.
Nov 23 16:20:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:05.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:05 np0005532762 systemd[1]: Started Session 56 of User zuul.
Nov 23 16:20:05 np0005532762 systemd[1]: session-56.scope: Deactivated successfully.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: Session 56 logged out. Waiting for processes to exit.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: Removed session 56.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: New session 57 of user zuul.
Nov 23 16:20:05 np0005532762 systemd[1]: Started Session 57 of User zuul.
Nov 23 16:20:05 np0005532762 systemd[1]: session-57.scope: Deactivated successfully.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: Session 57 logged out. Waiting for processes to exit.
Nov 23 16:20:05 np0005532762 systemd-logind[793]: Removed session 57.
Nov 23 16:20:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:06.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:07.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.449 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.450 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:20:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:20:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/280190509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:20:07 np0005532762 nova_compute[230183]: 2025-11-23 21:20:07.871 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:20:08 np0005532762 podman[252445]: 2025-11-23 21:20:08.041037948 +0000 UTC m=+0.057511333 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.059 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.060 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4833MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.061 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.061 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:20:08 np0005532762 podman[252444]: 2025-11-23 21:20:08.070575774 +0000 UTC m=+0.089151395 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.129 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.130 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:20:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:08.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.155 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.266 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.359 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:20:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/515802534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.585 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.590 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.612 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.614 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:20:08 np0005532762 nova_compute[230183]: 2025-11-23 21:20:08.614 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:20:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:09 np0005532762 nova_compute[230183]: 2025-11-23 21:20:09.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:10.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:10 np0005532762 podman[252513]: 2025-11-23 21:20:10.631344775 +0000 UTC m=+0.047523296 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 16:20:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:11 np0005532762 nova_compute[230183]: 2025-11-23 21:20:11.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:12 np0005532762 nova_compute[230183]: 2025-11-23 21:20:12.441 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:13 np0005532762 nova_compute[230183]: 2025-11-23 21:20:13.306 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:13 np0005532762 nova_compute[230183]: 2025-11-23 21:20:13.361 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:13 np0005532762 nova_compute[230183]: 2025-11-23 21:20:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:13 np0005532762 nova_compute[230183]: 2025-11-23 21:20:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:13 np0005532762 nova_compute[230183]: 2025-11-23 21:20:13.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:20:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:15 np0005532762 nova_compute[230183]: 2025-11-23 21:20:15.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:20:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:16.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:18 np0005532762 nova_compute[230183]: 2025-11-23 21:20:18.308 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:18 np0005532762 nova_compute[230183]: 2025-11-23 21:20:18.363 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:22.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:23 np0005532762 nova_compute[230183]: 2025-11-23 21:20:23.311 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:23 np0005532762 nova_compute[230183]: 2025-11-23 21:20:23.364 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:28 np0005532762 nova_compute[230183]: 2025-11-23 21:20:28.313 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:28 np0005532762 nova_compute[230183]: 2025-11-23 21:20:28.365 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:29.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:33 np0005532762 nova_compute[230183]: 2025-11-23 21:20:33.358 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:33 np0005532762 nova_compute[230183]: 2025-11-23 21:20:33.367 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:20:35 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 3842 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2001 writes, 6920 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 6.50 MB, 0.01 MB/s#012Interval WAL: 2001 writes, 838 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:20:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:36.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:38 np0005532762 nova_compute[230183]: 2025-11-23 21:20:38.540 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:38 np0005532762 podman[252574]: 2025-11-23 21:20:38.640989937 +0000 UTC m=+0.052235623 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:20:38 np0005532762 podman[252573]: 2025-11-23 21:20:38.66404089 +0000 UTC m=+0.082985370 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:20:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:39.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:40.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:41 np0005532762 podman[252619]: 2025-11-23 21:20:41.668384133 +0000 UTC m=+0.070033146 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 23 16:20:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:43 np0005532762 nova_compute[230183]: 2025-11-23 21:20:43.362 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:43 np0005532762 nova_compute[230183]: 2025-11-23 21:20:43.543 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:20:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.751805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845751972, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2635, "num_deletes": 508, "total_data_size": 5368939, "memory_usage": 5451584, "flush_reason": "Manual Compaction"}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845783845, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 3492192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33739, "largest_seqno": 36369, "table_properties": {"data_size": 3481003, "index_size": 6403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 30131, "raw_average_key_size": 20, "raw_value_size": 3455012, "raw_average_value_size": 2350, "num_data_blocks": 274, "num_entries": 1470, "num_filter_entries": 1470, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932684, "oldest_key_time": 1763932684, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 32108 microseconds, and 14448 cpu microseconds.
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.783933) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 3492192 bytes OK
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.783952) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785917) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785932) EVENT_LOG_v1 {"time_micros": 1763932845785927, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.785950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 5355374, prev total WAL file size 5355374, number of live WAL files 2.
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.787642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353033' seq:0, type:0; will stop at (end)
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3410KB)], [63(13MB)]
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845787693, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18098452, "oldest_snapshot_seqno": -1}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6671 keys, 16620064 bytes, temperature: kUnknown
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845957957, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 16620064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16573642, "index_size": 28646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174009, "raw_average_key_size": 26, "raw_value_size": 16451780, "raw_average_value_size": 2466, "num_data_blocks": 1141, "num_entries": 6671, "num_filter_entries": 6671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932845, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.958197) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16620064 bytes
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.959515) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.3 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 13.9 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(9.9) write-amplify(4.8) OK, records in: 7704, records dropped: 1033 output_compression: NoCompression
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.959542) EVENT_LOG_v1 {"time_micros": 1763932845959531, "job": 38, "event": "compaction_finished", "compaction_time_micros": 170335, "compaction_time_cpu_micros": 58410, "output_level": 6, "num_output_files": 1, "total_output_size": 16620064, "num_input_records": 7704, "num_output_records": 6671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845960400, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932845964346, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.787527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:45 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:20:45.964423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:20:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:46.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:48 np0005532762 nova_compute[230183]: 2025-11-23 21:20:48.364 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:48 np0005532762 nova_compute[230183]: 2025-11-23 21:20:48.545 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:20:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:48.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:20:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:50 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:50 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:20:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.078 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:20:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:20:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:20:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:20:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:51.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:53 np0005532762 nova_compute[230183]: 2025-11-23 21:20:53.366 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:53 np0005532762 nova_compute[230183]: 2025-11-23 21:20:53.546 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:55.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:20:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:58 np0005532762 nova_compute[230183]: 2025-11-23 21:20:58.370 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:58 np0005532762 nova_compute[230183]: 2025-11-23 21:20:58.546 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:20:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:20:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:20:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:20:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:20:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:20:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:00.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:03 np0005532762 nova_compute[230183]: 2025-11-23 21:21:03.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:03 np0005532762 nova_compute[230183]: 2025-11-23 21:21:03.547 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:04.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:21:06 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7004 writes, 36K keys, 7004 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 7004 writes, 7004 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1580 writes, 8388 keys, 1580 commit groups, 1.0 writes per commit group, ingest: 17.94 MB, 0.03 MB/s#012Interval WAL: 1580 writes, 1580 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     64.9      0.85              0.15        19    0.045       0      0       0.0       0.0#012  L6      1/0   15.85 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4     89.6     77.0      3.15              0.70        18    0.175    101K    10K       0.0       0.0#012 Sum      1/0   15.85 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     70.5     74.4      4.00              0.85        37    0.108    101K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9     85.0     88.5      0.99              0.30        10    0.099     34K   3616       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     89.6     77.0      3.15              0.70        18    0.175    101K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     65.1      0.85              0.15        18    0.047       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.015#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.12 MB/s write, 0.28 GB read, 0.12 MB/s read, 4.0 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560649e57350#2 capacity: 304.00 MB usage: 24.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00019 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1490,23.59 MB,7.75977%) FilterBlock(37,298.73 KB,0.0959647%) IndexBlock(37,520.92 KB,0.16734%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 16:21:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:07 np0005532762 nova_compute[230183]: 2025-11-23 21:21:07.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.386 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.549 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:21:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18675691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:21:08 np0005532762 nova_compute[230183]: 2025-11-23 21:21:08.880 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.402 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4842MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.403 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.465 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.465 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.493 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:21:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:09 np0005532762 podman[252902]: 2025-11-23 21:21:09.645611278 +0000 UTC m=+0.060859323 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:21:09 np0005532762 podman[252901]: 2025-11-23 21:21:09.6708456 +0000 UTC m=+0.086036683 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:21:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:21:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/46532897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.930 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.937 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.954 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.956 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:21:09 np0005532762 nova_compute[230183]: 2025-11-23 21:21:09.956 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:21:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:12 np0005532762 podman[252967]: 2025-11-23 21:21:12.637628663 +0000 UTC m=+0.057312847 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 16:21:12 np0005532762 nova_compute[230183]: 2025-11-23 21:21:12.957 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:12 np0005532762 nova_compute[230183]: 2025-11-23 21:21:12.957 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:12 np0005532762 nova_compute[230183]: 2025-11-23 21:21:12.958 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:13 np0005532762 nova_compute[230183]: 2025-11-23 21:21:13.413 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:13 np0005532762 nova_compute[230183]: 2025-11-23 21:21:13.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:21:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:21:13 np0005532762 nova_compute[230183]: 2025-11-23 21:21:13.550 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.426 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.439 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:21:14 np0005532762 nova_compute[230183]: 2025-11-23 21:21:14.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:21:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:17.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:18 np0005532762 nova_compute[230183]: 2025-11-23 21:21:18.416 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:18 np0005532762 nova_compute[230183]: 2025-11-23 21:21:18.552 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.861024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879861065, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 580, "num_deletes": 251, "total_data_size": 1036719, "memory_usage": 1047368, "flush_reason": "Manual Compaction"}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879869583, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 682582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36374, "largest_seqno": 36949, "table_properties": {"data_size": 679529, "index_size": 1025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7144, "raw_average_key_size": 19, "raw_value_size": 673450, "raw_average_value_size": 1815, "num_data_blocks": 44, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932846, "oldest_key_time": 1763932846, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8597 microseconds, and 4477 cpu microseconds.
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869620) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 682582 bytes OK
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.869638) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870692) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870708) EVENT_LOG_v1 {"time_micros": 1763932879870702, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.870725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1033404, prev total WAL file size 1033404, number of live WAL files 2.
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.871183) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(666KB)], [66(15MB)]
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879871218, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 17302646, "oldest_snapshot_seqno": -1}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6528 keys, 15181559 bytes, temperature: kUnknown
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879959164, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 15181559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15137261, "index_size": 26908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171742, "raw_average_key_size": 26, "raw_value_size": 15018910, "raw_average_value_size": 2300, "num_data_blocks": 1063, "num_entries": 6528, "num_filter_entries": 6528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.959417) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15181559 bytes
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.960530) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 172.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 15.9 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(47.6) write-amplify(22.2) OK, records in: 7042, records dropped: 514 output_compression: NoCompression
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.960551) EVENT_LOG_v1 {"time_micros": 1763932879960541, "job": 40, "event": "compaction_finished", "compaction_time_micros": 88015, "compaction_time_cpu_micros": 38831, "output_level": 6, "num_output_files": 1, "total_output_size": 15181559, "num_input_records": 7042, "num_output_records": 6528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879960800, "job": 40, "event": "table_file_deletion", "file_number": 68}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932879964503, "job": 40, "event": "table_file_deletion", "file_number": 66}
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.871112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:19 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:21:19.964633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:21:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:23 np0005532762 nova_compute[230183]: 2025-11-23 21:21:23.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:23 np0005532762 nova_compute[230183]: 2025-11-23 21:21:23.553 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:28 np0005532762 nova_compute[230183]: 2025-11-23 21:21:28.419 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:28 np0005532762 nova_compute[230183]: 2025-11-23 21:21:28.555 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:21:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:21:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:31.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:33 np0005532762 nova_compute[230183]: 2025-11-23 21:21:33.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:33 np0005532762 nova_compute[230183]: 2025-11-23 21:21:33.557 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:33.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:35.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:38 np0005532762 nova_compute[230183]: 2025-11-23 21:21:38.421 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:38 np0005532762 nova_compute[230183]: 2025-11-23 21:21:38.559 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:38.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:39.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:40 np0005532762 podman[253030]: 2025-11-23 21:21:40.635640898 +0000 UTC m=+0.050160377 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 16:21:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:40 np0005532762 podman[253029]: 2025-11-23 21:21:40.706814755 +0000 UTC m=+0.121412976 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 16:21:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:41.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:43 np0005532762 nova_compute[230183]: 2025-11-23 21:21:43.423 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:43 np0005532762 nova_compute[230183]: 2025-11-23 21:21:43.560 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:43.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:43 np0005532762 podman[253075]: 2025-11-23 21:21:43.633753873 +0000 UTC m=+0.048840972 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 16:21:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:44.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:45.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:46.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:48 np0005532762 nova_compute[230183]: 2025-11-23 21:21:48.423 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:48 np0005532762 nova_compute[230183]: 2025-11-23 21:21:48.562 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:48.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:49.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.079 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:21:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.080 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:21:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:21:51.080 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:21:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:51 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:21:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:53 np0005532762 nova_compute[230183]: 2025-11-23 21:21:53.461 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:53 np0005532762 nova_compute[230183]: 2025-11-23 21:21:53.564 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:53.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:56 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:56 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:21:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:21:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:21:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:57.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:21:58 np0005532762 nova_compute[230183]: 2025-11-23 21:21:58.464 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:58 np0005532762 nova_compute[230183]: 2025-11-23 21:21:58.564 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:21:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:21:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:21:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:21:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:21:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:21:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:21:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:22:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:01 np0005532762 nova_compute[230183]: 2025-11-23 21:22:01.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:01.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:03 np0005532762 nova_compute[230183]: 2025-11-23 21:22:03.466 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:03 np0005532762 nova_compute[230183]: 2025-11-23 21:22:03.565 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:06.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:07 np0005532762 nova_compute[230183]: 2025-11-23 21:22:07.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:07.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.460 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.461 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.501 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.568 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:22:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1469669657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:22:08 np0005532762 nova_compute[230183]: 2025-11-23 21:22:08.931 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.085 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.086 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4861MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.087 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.087 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.311 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.311 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.462 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.583 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.583 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.610 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:22:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.644 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:22:09 np0005532762 nova_compute[230183]: 2025-11-23 21:22:09.661 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:22:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:22:10 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107853304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.127 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.133 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.147 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.149 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.149 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.150 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.150 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.440 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:22:10 np0005532762 nova_compute[230183]: 2025-11-23 21:22:10.452 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:22:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:11 np0005532762 nova_compute[230183]: 2025-11-23 21:22:11.440 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:11 np0005532762 podman[253312]: 2025-11-23 21:22:11.627683234 +0000 UTC m=+0.044609469 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:22:11 np0005532762 podman[253311]: 2025-11-23 21:22:11.656930373 +0000 UTC m=+0.074206207 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:22:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:13 np0005532762 nova_compute[230183]: 2025-11-23 21:22:13.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:13 np0005532762 nova_compute[230183]: 2025-11-23 21:22:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:13 np0005532762 nova_compute[230183]: 2025-11-23 21:22:13.501 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:13 np0005532762 nova_compute[230183]: 2025-11-23 21:22:13.569 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:14 np0005532762 nova_compute[230183]: 2025-11-23 21:22:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:14 np0005532762 nova_compute[230183]: 2025-11-23 21:22:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:14 np0005532762 nova_compute[230183]: 2025-11-23 21:22:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:22:14 np0005532762 podman[253359]: 2025-11-23 21:22:14.649628875 +0000 UTC m=+0.062932846 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 16:22:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:15 np0005532762 nova_compute[230183]: 2025-11-23 21:22:15.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:15 np0005532762 nova_compute[230183]: 2025-11-23 21:22:15.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:22:15 np0005532762 nova_compute[230183]: 2025-11-23 21:22:15.429 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:22:15 np0005532762 nova_compute[230183]: 2025-11-23 21:22:15.447 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:22:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:16.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:18 np0005532762 nova_compute[230183]: 2025-11-23 21:22:18.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:18 np0005532762 nova_compute[230183]: 2025-11-23 21:22:18.503 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:18 np0005532762 nova_compute[230183]: 2025-11-23 21:22:18.570 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:22.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:23 np0005532762 nova_compute[230183]: 2025-11-23 21:22:23.507 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:23 np0005532762 nova_compute[230183]: 2025-11-23 21:22:23.571 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:22:23 np0005532762 nova_compute[230183]: 2025-11-23 21:22:23.573 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:24.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:27.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:28 np0005532762 nova_compute[230183]: 2025-11-23 21:22:28.508 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:28 np0005532762 nova_compute[230183]: 2025-11-23 21:22:28.574 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:22:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:22:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:29.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:30.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.829204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950829234, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 958, "num_deletes": 250, "total_data_size": 2084921, "memory_usage": 2117176, "flush_reason": "Manual Compaction"}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950837283, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 901115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36954, "largest_seqno": 37907, "table_properties": {"data_size": 897476, "index_size": 1355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9792, "raw_average_key_size": 20, "raw_value_size": 889693, "raw_average_value_size": 1901, "num_data_blocks": 58, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932880, "oldest_key_time": 1763932880, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8131 microseconds, and 3610 cpu microseconds.
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.837329) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 901115 bytes OK
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.837350) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838674) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838688) EVENT_LOG_v1 {"time_micros": 1763932950838683, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.838705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2080125, prev total WAL file size 2080125, number of live WAL files 2.
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.839522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(879KB)], [69(14MB)]
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950839657, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 16082674, "oldest_snapshot_seqno": -1}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6509 keys, 12457430 bytes, temperature: kUnknown
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950945675, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12457430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12417137, "index_size": 22903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171528, "raw_average_key_size": 26, "raw_value_size": 12303004, "raw_average_value_size": 1890, "num_data_blocks": 897, "num_entries": 6509, "num_filter_entries": 6509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763932950, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.945941) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12457430 bytes
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(31.7) write-amplify(13.8) OK, records in: 6996, records dropped: 487 output_compression: NoCompression
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.948659) EVENT_LOG_v1 {"time_micros": 1763932950948650, "job": 42, "event": "compaction_finished", "compaction_time_micros": 106082, "compaction_time_cpu_micros": 31243, "output_level": 6, "num_output_files": 1, "total_output_size": 12457430, "num_input_records": 6996, "num_output_records": 6509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950949057, "job": 42, "event": "table_file_deletion", "file_number": 71}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763932950952122, "job": 42, "event": "table_file_deletion", "file_number": 69}
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.839341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:30 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:22:30.952381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:22:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:22:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:31.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:22:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:33 np0005532762 nova_compute[230183]: 2025-11-23 21:22:33.510 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:33 np0005532762 nova_compute[230183]: 2025-11-23 21:22:33.574 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:33.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:34.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:35.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:36.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:37.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.576 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.578 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.727 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:38 np0005532762 nova_compute[230183]: 2025-11-23 21:22:38.727 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:22:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:38.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:39.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:41.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:42 np0005532762 podman[253419]: 2025-11-23 21:22:42.645073153 +0000 UTC m=+0.054005800 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:22:42 np0005532762 podman[253418]: 2025-11-23 21:22:42.668838026 +0000 UTC m=+0.084592784 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 16:22:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:42.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.728 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.729 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:22:43 np0005532762 nova_compute[230183]: 2025-11-23 21:22:43.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:44.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:45 np0005532762 radosgw[84498]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 23 16:22:45 np0005532762 podman[253463]: 2025-11-23 21:22:45.642736387 +0000 UTC m=+0.058217452 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 16:22:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:45.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:47.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:48 np0005532762 nova_compute[230183]: 2025-11-23 21:22:48.731 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:22:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:49.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:50.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:22:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:22:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:22:51.081 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:22:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:53 np0005532762 nova_compute[230183]: 2025-11-23 21:22:53.732 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:22:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:54.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:55.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:22:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:56.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:22:57 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:22:57 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:22:57 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:22:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:22:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:57.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:58 np0005532762 nova_compute[230183]: 2025-11-23 21:22:58.733 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:22:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:22:58.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:22:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:22:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:22:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:22:59.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:02 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:23:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:03 np0005532762 nova_compute[230183]: 2025-11-23 21:23:03.736 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:23:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:23:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:05.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:23:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:23:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:07 np0005532762 nova_compute[230183]: 2025-11-23 21:23:07.443 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:07.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:08 np0005532762 nova_compute[230183]: 2025-11-23 21:23:08.738 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.465 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.466 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:23:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:09.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:23:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/438076266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:23:09 np0005532762 nova_compute[230183]: 2025-11-23 21:23:09.903 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.177 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.178 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4866MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.178 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.179 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.339 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:23:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:23:10 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/438593123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:23:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.787 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.794 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.858 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.863 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:23:10 np0005532762 nova_compute[230183]: 2025-11-23 21:23:10.863 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:23:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:12 np0005532762 nova_compute[230183]: 2025-11-23 21:23:12.864 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:12 np0005532762 nova_compute[230183]: 2025-11-23 21:23:12.864 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:13 np0005532762 podman[253698]: 2025-11-23 21:23:13.645834151 +0000 UTC m=+0.049328594 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 16:23:13 np0005532762 podman[253697]: 2025-11-23 21:23:13.679615431 +0000 UTC m=+0.084254825 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 16:23:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:13 np0005532762 nova_compute[230183]: 2025-11-23 21:23:13.740 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:14 np0005532762 nova_compute[230183]: 2025-11-23 21:23:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:14 np0005532762 nova_compute[230183]: 2025-11-23 21:23:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:14 np0005532762 nova_compute[230183]: 2025-11-23 21:23:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:23:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:15 np0005532762 nova_compute[230183]: 2025-11-23 21:23:15.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.463 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.463 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:23:16 np0005532762 podman[253743]: 2025-11-23 21:23:16.665703676 +0000 UTC m=+0.074150919 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.779 230187 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:23:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:16 np0005532762 nova_compute[230183]: 2025-11-23 21:23:16.813 230187 DEBUG oslo_concurrency.processutils [None req-d89b790b-8376-465b-8448-23090b964ac1 8c34b8adab3049c9b4e37e075333da23 3f8fb5175f85402ba20cf9c6989d47cf - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:23:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.742 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.743 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.744 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.744 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.745 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:18 np0005532762 nova_compute[230183]: 2025-11-23 21:23:18.746 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:22 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:22.008 142158 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '3a:26:f0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:d5:4d:db:d5:2b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 16:23:22 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:22.009 142158 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 16:23:22 np0005532762 nova_compute[230183]: 2025-11-23 21:23:22.058 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:23 np0005532762 nova_compute[230183]: 2025-11-23 21:23:23.745 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:23 np0005532762 nova_compute[230183]: 2025-11-23 21:23:23.748 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:25.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:26.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:27.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:28 np0005532762 nova_compute[230183]: 2025-11-23 21:23:28.747 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:28 np0005532762 nova_compute[230183]: 2025-11-23 21:23:28.749 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:28.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:29 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:29.011 142158 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d8ff4ac4-2bee-48db-b79e-2466bc4db046, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 16:23:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:33 np0005532762 nova_compute[230183]: 2025-11-23 21:23:33.749 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:33 np0005532762 nova_compute[230183]: 2025-11-23 21:23:33.750 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:34.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:37.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:38 np0005532762 nova_compute[230183]: 2025-11-23 21:23:38.751 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:38.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:43 np0005532762 nova_compute[230183]: 2025-11-23 21:23:43.753 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:44 np0005532762 podman[253806]: 2025-11-23 21:23:44.663756652 +0000 UTC m=+0.070319547 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 16:23:44 np0005532762 podman[253805]: 2025-11-23 21:23:44.691201634 +0000 UTC m=+0.104552370 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 16:23:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:46 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:23:46 np0005532762 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 16:23:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:47 np0005532762 podman[253852]: 2025-11-23 21:23:47.687380234 +0000 UTC m=+0.100591395 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 16:23:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:48 np0005532762 nova_compute[230183]: 2025-11-23 21:23:48.755 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:50.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.083 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:23:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.083 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:23:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:23:51.084 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:23:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:52.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.756 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.758 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.759 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:53 np0005532762 nova_compute[230183]: 2025-11-23 21:23:53.760 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:23:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:23:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:54.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:56.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:23:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.761 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.762 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.782 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:23:58 np0005532762 nova_compute[230183]: 2025-11-23 21:23:58.783 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:23:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:23:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:23:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:23:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:23:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:23:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:00.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:03.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:03 np0005532762 nova_compute[230183]: 2025-11-23 21:24:03.784 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:24:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:04 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:24:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:04.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:07 np0005532762 nova_compute[230183]: 2025-11-23 21:24:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:07.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.785 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.787 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.788 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:08 np0005532762 nova_compute[230183]: 2025-11-23 21:24:08.790 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.455 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.455 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:24:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:11.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:24:11 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184943175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:24:11 np0005532762 nova_compute[230183]: 2025-11-23 21:24:11.886 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.047 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.048 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.123 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.124 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.150 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:24:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:24:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4061681963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.585 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.591 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.612 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.614 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:24:12 np0005532762 nova_compute[230183]: 2025-11-23 21:24:12.614 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:24:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:13 np0005532762 nova_compute[230183]: 2025-11-23 21:24:13.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:13 np0005532762 nova_compute[230183]: 2025-11-23 21:24:13.614 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:13 np0005532762 nova_compute[230183]: 2025-11-23 21:24:13.789 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:14 np0005532762 nova_compute[230183]: 2025-11-23 21:24:14.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:14 np0005532762 nova_compute[230183]: 2025-11-23 21:24:14.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:24:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:14.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:15 np0005532762 podman[254090]: 2025-11-23 21:24:15.650769938 +0000 UTC m=+0.055089141 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:24:15 np0005532762 podman[254089]: 2025-11-23 21:24:15.685650498 +0000 UTC m=+0.096636058 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 16:24:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.445 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:16 np0005532762 nova_compute[230183]: 2025-11-23 21:24:16.446 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:16.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:18 np0005532762 podman[254135]: 2025-11-23 21:24:18.651829948 +0000 UTC m=+0.071085808 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.791 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.794 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.842 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:18 np0005532762 nova_compute[230183]: 2025-11-23 21:24:18.843 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:20.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:21.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:22.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:23 np0005532762 nova_compute[230183]: 2025-11-23 21:24:23.442 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:24:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:23 np0005532762 nova_compute[230183]: 2025-11-23 21:24:23.844 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:24:27 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394746220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:24:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.845 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.847 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.886 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:28 np0005532762 nova_compute[230183]: 2025-11-23 21:24:28.886 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:28.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:33 np0005532762 nova_compute[230183]: 2025-11-23 21:24:33.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:33 np0005532762 nova_compute[230183]: 2025-11-23 21:24:33.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:33 np0005532762 nova_compute[230183]: 2025-11-23 21:24:33.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:24:33 np0005532762 nova_compute[230183]: 2025-11-23 21:24:33.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:33 np0005532762 nova_compute[230183]: 2025-11-23 21:24:33.890 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:38 np0005532762 nova_compute[230183]: 2025-11-23 21:24:38.888 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:24:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:24:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:42.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:43 np0005532762 nova_compute[230183]: 2025-11-23 21:24:43.889 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:46 np0005532762 podman[254195]: 2025-11-23 21:24:46.64622537 +0000 UTC m=+0.061630545 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 16:24:46 np0005532762 podman[254194]: 2025-11-23 21:24:46.704063952 +0000 UTC m=+0.113878118 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 23 16:24:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.892 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.893 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.894 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.894 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.930 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:48 np0005532762 nova_compute[230183]: 2025-11-23 21:24:48.931 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:24:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:49 np0005532762 podman[254240]: 2025-11-23 21:24:49.678542655 +0000 UTC m=+0.088944714 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 16:24:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.085 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:24:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.086 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:24:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:24:51.086 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:24:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:53 np0005532762 nova_compute[230183]: 2025-11-23 21:24:53.931 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:56.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.684531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096684574, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1636, "num_deletes": 251, "total_data_size": 4102582, "memory_usage": 4151296, "flush_reason": "Manual Compaction"}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096711618, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2679315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37912, "largest_seqno": 39543, "table_properties": {"data_size": 2672517, "index_size": 3933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14367, "raw_average_key_size": 20, "raw_value_size": 2658824, "raw_average_value_size": 3713, "num_data_blocks": 171, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763932950, "oldest_key_time": 1763932950, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 27128 microseconds, and 10271 cpu microseconds.
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711659) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2679315 bytes OK
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.711677) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712738) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712750) EVENT_LOG_v1 {"time_micros": 1763933096712746, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.712768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4095127, prev total WAL file size 4095127, number of live WAL files 2.
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.713728) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2616KB)], [72(11MB)]
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096713798, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 15136745, "oldest_snapshot_seqno": -1}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6709 keys, 12987279 bytes, temperature: kUnknown
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096801780, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12987279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12945252, "index_size": 24123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 176383, "raw_average_key_size": 26, "raw_value_size": 12827118, "raw_average_value_size": 1911, "num_data_blocks": 946, "num_entries": 6709, "num_filter_entries": 6709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763933096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.802307) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12987279 bytes
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.805342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.6 rd, 147.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 7225, records dropped: 516 output_compression: NoCompression
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.805376) EVENT_LOG_v1 {"time_micros": 1763933096805359, "job": 44, "event": "compaction_finished", "compaction_time_micros": 88213, "compaction_time_cpu_micros": 36118, "output_level": 6, "num_output_files": 1, "total_output_size": 12987279, "num_input_records": 7225, "num_output_records": 6709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096806487, "job": 44, "event": "table_file_deletion", "file_number": 74}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933096810536, "job": 44, "event": "table_file_deletion", "file_number": 72}
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.713494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:24:56.810612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:24:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:24:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:24:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:24:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:24:58.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:24:58 np0005532762 nova_compute[230183]: 2025-11-23 21:24:58.933 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:24:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:24:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:24:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:24:58.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:00.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:02.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:03 np0005532762 nova_compute[230183]: 2025-11-23 21:25:03.933 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:06.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:07 np0005532762 nova_compute[230183]: 2025-11-23 21:25:07.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:25:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:08 np0005532762 nova_compute[230183]: 2025-11-23 21:25:08.934 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:08 np0005532762 nova_compute[230183]: 2025-11-23 21:25:08.937 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:09 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 23 16:25:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:25:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:10 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:25:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:25:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.447 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.448 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:25:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:25:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2706958688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:25:12 np0005532762 nova_compute[230183]: 2025-11-23 21:25:12.914 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:25:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:12.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.064 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.065 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4857MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.066 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.066 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.118 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.131 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 16:25:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:25:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1955916470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.567 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.576 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.592 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.595 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.596 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:25:13 np0005532762 nova_compute[230183]: 2025-11-23 21:25:13.937 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:25:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:14 np0005532762 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:14 np0005532762 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:14 np0005532762 nova_compute[230183]: 2025-11-23 21:25:14.597 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:14 np0005532762 nova_compute[230183]: 2025-11-23 21:25:14.598 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:25:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:14.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:16 np0005532762 nova_compute[230183]: 2025-11-23 21:25:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:16 np0005532762 nova_compute[230183]: 2025-11-23 21:25:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:25:16 np0005532762 nova_compute[230183]: 2025-11-23 21:25:16.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:25:16 np0005532762 nova_compute[230183]: 2025-11-23 21:25:16.439 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:25:16 np0005532762 nova_compute[230183]: 2025-11-23 21:25:16.439 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:16 np0005532762 podman[254476]: 2025-11-23 21:25:16.973718603 +0000 UTC m=+0.049586743 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 16:25:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:16.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:17 np0005532762 podman[254475]: 2025-11-23 21:25:17.009702974 +0000 UTC m=+0.086877969 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:25:17 np0005532762 nova_compute[230183]: 2025-11-23 21:25:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:17 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:25:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:18 np0005532762 nova_compute[230183]: 2025-11-23 21:25:18.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:25:18 np0005532762 nova_compute[230183]: 2025-11-23 21:25:18.939 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:25:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.002000052s ======
Nov 23 16:25:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 23 16:25:20 np0005532762 podman[254525]: 2025-11-23 21:25:20.233542237 +0000 UTC m=+0.059427537 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:25:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:22.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:23 np0005532762 nova_compute[230183]: 2025-11-23 21:25:23.941 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 16:25:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:25.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:27.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:28 np0005532762 nova_compute[230183]: 2025-11-23 21:25:28.943 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 16:25:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:29.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:33 np0005532762 nova_compute[230183]: 2025-11-23 21:25:33.945 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:25:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:34.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:35.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:36.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:38.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.947 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.948 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.948 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.949 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.949 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 23 16:25:38 np0005532762 nova_compute[230183]: 2025-11-23 21:25:38.951 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 16:25:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:25:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:25:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:43.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:43 np0005532762 nova_compute[230183]: 2025-11-23 21:25:43.952 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:44.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:47.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:47 np0005532762 podman[254585]: 2025-11-23 21:25:47.671505896 +0000 UTC m=+0.071524639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 16:25:47 np0005532762 podman[254584]: 2025-11-23 21:25:47.756915394 +0000 UTC m=+0.154636125 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 16:25:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:48.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:48 np0005532762 nova_compute[230183]: 2025-11-23 21:25:48.953 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:50 np0005532762 podman[254655]: 2025-11-23 21:25:50.600788312 +0000 UTC m=+0.095705084 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 16:25:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:25:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:25:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:25:51.087 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:25:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:53 np0005532762 nova_compute[230183]: 2025-11-23 21:25:53.956 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:54.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:56.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:25:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:25:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:25:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:25:58.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:25:58 np0005532762 nova_compute[230183]: 2025-11-23 21:25:58.957 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:58 np0005532762 nova_compute[230183]: 2025-11-23 21:25:58.958 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:25:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:25:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:25:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:25:59.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:01.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.959 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.960 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.961 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:03 np0005532762 nova_compute[230183]: 2025-11-23 21:26:03.962 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:05.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:08 np0005532762 nova_compute[230183]: 2025-11-23 21:26:08.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:08 np0005532762 nova_compute[230183]: 2025-11-23 21:26:08.963 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:10.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.455 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.456 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.456 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:26:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:26:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3287437788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.879 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:26:13 np0005532762 nova_compute[230183]: 2025-11-23 21:26:13.963 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.037 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.038 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4871MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.038 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.039 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.119 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.120 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.164 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:26:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:26:14 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/983736605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.615 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.620 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.635 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.636 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:26:14 np0005532762 nova_compute[230183]: 2025-11-23 21:26:14.636 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:26:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:15 np0005532762 nova_compute[230183]: 2025-11-23 21:26:15.636 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:15 np0005532762 nova_compute[230183]: 2025-11-23 21:26:15.637 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:15 np0005532762 nova_compute[230183]: 2025-11-23 21:26:15.637 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:26:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:17 np0005532762 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:17 np0005532762 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:26:17 np0005532762 nova_compute[230183]: 2025-11-23 21:26:17.427 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:26:17 np0005532762 nova_compute[230183]: 2025-11-23 21:26:17.442 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:26:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:17 np0005532762 podman[254880]: 2025-11-23 21:26:17.724810625 +0000 UTC m=+0.053953150 container exec e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 23 16:26:17 np0005532762 podman[254880]: 2025-11-23 21:26:17.809676188 +0000 UTC m=+0.138818633 container exec_died e0f32b933903515922e5686c826cb40ce38f068428c3d1354877191c9eb6f008 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 23 16:26:17 np0005532762 podman[254915]: 2025-11-23 21:26:17.954116451 +0000 UTC m=+0.060284639 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 16:26:18 np0005532762 podman[254914]: 2025-11-23 21:26:18.011012119 +0000 UTC m=+0.122617212 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 16:26:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:18 np0005532762 podman[255040]: 2025-11-23 21:26:18.41331909 +0000 UTC m=+0.138462115 container exec 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:18 np0005532762 podman[255040]: 2025-11-23 21:26:18.616112969 +0000 UTC m=+0.341256064 container exec_died 64d60b8099df0a9bc1b978bb8d0ff809e5476e0bdc0e1ff07d52a594a6c59770 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.966 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.968 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.969 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.969 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.996 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:18 np0005532762 nova_compute[230183]: 2025-11-23 21:26:18.997 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:19.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:19 np0005532762 podman[255179]: 2025-11-23 21:26:19.316196444 +0000 UTC m=+0.067106291 container exec 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:26:19 np0005532762 podman[255179]: 2025-11-23 21:26:19.333370782 +0000 UTC m=+0.084280629 container exec_died 5efdb4ba0bcd5fe6f292f73f388707523f3095db64c5b10f074cdf2e15575dfb (image=quay.io/ceph/haproxy:2.3, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-haproxy-nfs-cephfs-compute-1-iwomei)
Nov 23 16:26:19 np0005532762 nova_compute[230183]: 2025-11-23 21:26:19.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:19 np0005532762 podman[255246]: 2025-11-23 21:26:19.525783115 +0000 UTC m=+0.044207851 container exec 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc.)
Nov 23 16:26:19 np0005532762 podman[255246]: 2025-11-23 21:26:19.537133467 +0000 UTC m=+0.055558183 container exec_died 2804f80c8f66202230c93ef9e5dfb79827d221d8c2f51d077915585a4021bec3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-03808be8-ae4a-5548-82e6-4a294f1bc627-keepalived-nfs-cephfs-compute-1-lwmzxc, io.openshift.expose-services=, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, description=keepalived for Ceph, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, release=1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 23 16:26:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:20 np0005532762 nova_compute[230183]: 2025-11-23 21:26:20.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:20 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 23 16:26:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:26:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:21 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:26:21 np0005532762 podman[255363]: 2025-11-23 21:26:21.669038873 +0000 UTC m=+0.079750788 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 16:26:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:23 np0005532762 nova_compute[230183]: 2025-11-23 21:26:23.996 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:24 np0005532762 nova_compute[230183]: 2025-11-23 21:26:23.999 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:24.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:26 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:26:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:27.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:27 np0005532762 nova_compute[230183]: 2025-11-23 21:26:27.422 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:26:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:29 np0005532762 nova_compute[230183]: 2025-11-23 21:26:29.001 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:26:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:26:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:32.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:34 np0005532762 nova_compute[230183]: 2025-11-23 21:26:34.002 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:34 np0005532762 nova_compute[230183]: 2025-11-23 21:26:34.004 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:34.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:39 np0005532762 nova_compute[230183]: 2025-11-23 21:26:39.004 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.006 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.009 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.041 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:44 np0005532762 nova_compute[230183]: 2025-11-23 21:26:44.042 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:48 np0005532762 podman[255454]: 2025-11-23 21:26:48.653425592 +0000 UTC m=+0.062139028 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 16:26:48 np0005532762 podman[255453]: 2025-11-23 21:26:48.709647762 +0000 UTC m=+0.119364605 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 16:26:49 np0005532762 nova_compute[230183]: 2025-11-23 21:26:49.042 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:49 np0005532762 nova_compute[230183]: 2025-11-23 21:26:49.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:50.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.088 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:26:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.089 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:26:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:26:51.089 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:26:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:52 np0005532762 podman[255525]: 2025-11-23 21:26:52.658673528 +0000 UTC m=+0.072618228 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 16:26:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.043 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.044 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.045 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.045 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:26:54 np0005532762 nova_compute[230183]: 2025-11-23 21:26:54.047 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:26:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:26:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:26:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:26:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:26:59 np0005532762 nova_compute[230183]: 2025-11-23 21:26:59.047 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:26:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:26:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:26:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:26:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.049 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.050 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.051 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:04 np0005532762 nova_compute[230183]: 2025-11-23 21:27:04.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 23 16:27:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 23 16:27:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:09 np0005532762 nova_compute[230183]: 2025-11-23 21:27:09.053 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:09.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:10 np0005532762 nova_compute[230183]: 2025-11-23 21:27:10.426 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.111071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231111168, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1593, "num_deletes": 255, "total_data_size": 4026113, "memory_usage": 4079280, "flush_reason": "Manual Compaction"}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231129016, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2630519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39548, "largest_seqno": 41136, "table_properties": {"data_size": 2623758, "index_size": 3896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14204, "raw_average_key_size": 19, "raw_value_size": 2610121, "raw_average_value_size": 3660, "num_data_blocks": 168, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763933097, "oldest_key_time": 1763933097, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17938 microseconds, and 5610 cpu microseconds.
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.129061) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2630519 bytes OK
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.129082) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130849) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130876) EVENT_LOG_v1 {"time_micros": 1763933231130858, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.130893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 4018763, prev total WAL file size 4018763, number of live WAL files 2.
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.131696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2568KB)], [75(12MB)]
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231131720, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15617798, "oldest_snapshot_seqno": -1}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6894 keys, 15452822 bytes, temperature: kUnknown
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231203771, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 15452822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15407049, "index_size": 27421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181176, "raw_average_key_size": 26, "raw_value_size": 15283088, "raw_average_value_size": 2216, "num_data_blocks": 1083, "num_entries": 6894, "num_filter_entries": 6894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763930466, "oldest_key_time": 0, "file_creation_time": 1763933231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7e89e5e7-e2ca-41cc-bef7-fd52c884a7cb", "db_session_id": "RYN2LDD9QR94TIN0USPF", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.204052) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15452822 bytes
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.209276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 214.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(11.8) write-amplify(5.9) OK, records in: 7422, records dropped: 528 output_compression: NoCompression
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.209296) EVENT_LOG_v1 {"time_micros": 1763933231209287, "job": 46, "event": "compaction_finished", "compaction_time_micros": 72172, "compaction_time_cpu_micros": 27394, "output_level": 6, "num_output_files": 1, "total_output_size": 15452822, "num_input_records": 7422, "num_output_records": 6894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231209894, "job": 46, "event": "table_file_deletion", "file_number": 77}
Nov 23 16:27:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763933231212978, "job": 46, "event": "table_file_deletion", "file_number": 75}
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.131639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:11 np0005532762 ceph-mon[80135]: rocksdb: (Original Log Time 2025/11/23-21:27:11.213084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 16:27:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:13.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.054 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.056 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.094 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.094 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:14.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.453 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.454 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.455 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.455 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:27:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:27:14 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1023863718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:27:14 np0005532762 nova_compute[230183]: 2025-11-23 21:27:14.926 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.097 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.098 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4862MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.099 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:27:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:15.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.296 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.297 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.452 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing inventories for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.543 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating ProviderTree inventory for provider bb217351-d4c8-44a4-9137-08393a1f72bc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.544 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Updating inventory in ProviderTree for provider bb217351-d4c8-44a4-9137-08393a1f72bc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.560 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing aggregate associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.590 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Refreshing trait associations for resource provider bb217351-d4c8-44a4-9137-08393a1f72bc, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 16:27:15 np0005532762 nova_compute[230183]: 2025-11-23 21:27:15.606 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:27:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:27:16 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/258649622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.087 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.091 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.108 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:27:16 np0005532762 nova_compute[230183]: 2025-11-23 21:27:16.110 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.122 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.123 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.123 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 16:27:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.448 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 16:27:17 np0005532762 nova_compute[230183]: 2025-11-23 21:27:17.449 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:18.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.095 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.097 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.098 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.128 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.130 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.428 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:19 np0005532762 nova_compute[230183]: 2025-11-23 21:27:19.428 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 16:27:19 np0005532762 podman[255628]: 2025-11-23 21:27:19.693206635 +0000 UTC m=+0.089991271 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 16:27:19 np0005532762 podman[255627]: 2025-11-23 21:27:19.745145751 +0000 UTC m=+0.146546780 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 16:27:20 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:20 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:20 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:20 np0005532762 nova_compute[230183]: 2025-11-23 21:27:20.443 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:20 np0005532762 nova_compute[230183]: 2025-11-23 21:27:20.444 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 16:27:20 np0005532762 nova_compute[230183]: 2025-11-23 21:27:20.460 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 16:27:21 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:21 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:21 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:21.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:21 np0005532762 nova_compute[230183]: 2025-11-23 21:27:21.444 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:22 np0005532762 nova_compute[230183]: 2025-11-23 21:27:22.423 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:27:22 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:22 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:22 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:22.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:22 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:23 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:23 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:23 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:23 np0005532762 podman[255674]: 2025-11-23 21:27:23.627696115 +0000 UTC m=+0.045189077 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.131 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.133 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.182 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:24 np0005532762 nova_compute[230183]: 2025-11-23 21:27:24.182 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:24 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:24 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:24 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:24.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:25 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:25 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:25 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:25.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:26 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:26 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:26 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:26.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:27 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:27 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:27 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:27.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:27 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:28 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:28 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:28 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:28.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.183 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.185 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.221 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:29 np0005532762 nova_compute[230183]: 2025-11-23 21:27:29.222 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:29 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:29 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:29 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:29.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:30 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:30 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:30 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:30.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:30 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:31 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:31 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:31 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 23 16:27:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:31 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 23 16:27:32 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:32 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:32 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:32 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:33 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:33 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:33 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.223 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.224 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:34 np0005532762 nova_compute[230183]: 2025-11-23 21:27:34.260 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:34 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:34 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:34 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:35 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:35 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:35 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:35 np0005532762 ceph-mon[80135]: from='mgr.14688 192.168.122.100:0/520882446' entity='mgr.compute-0.oyehye' 
Nov 23 16:27:36 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:36 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:36 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:37 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:37 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:37 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:37 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:38 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:38 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:38 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:39 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:39 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:39 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:39 np0005532762 nova_compute[230183]: 2025-11-23 21:27:39.294 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:39 np0005532762 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:39 np0005532762 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5035 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:39 np0005532762 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:39 np0005532762 nova_compute[230183]: 2025-11-23 21:27:39.296 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:40 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:40 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:40 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:41 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:41 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:41 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:42 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:42 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:42 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:42 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:42.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:43 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:43 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:43 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.298 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.300 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:44 np0005532762 nova_compute[230183]: 2025-11-23 21:27:44.332 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:44 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:44 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:44 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:44.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:45 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:45 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:45 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:46 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:46 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:46 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:47 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:47 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:47 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:47 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:48 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:48 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:48 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:49 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:49 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:49 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:49 np0005532762 nova_compute[230183]: 2025-11-23 21:27:49.333 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:49 np0005532762 nova_compute[230183]: 2025-11-23 21:27:49.335 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:50 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:50 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:50 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:50 np0005532762 podman[255841]: 2025-11-23 21:27:50.674433526 +0000 UTC m=+0.074849518 container health_status ef1df435f4083b27d588244fab568fccd7d3bb8074e85d3948aeaf98a3a921ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 23 16:27:50 np0005532762 podman[255840]: 2025-11-23 21:27:50.698178229 +0000 UTC m=+0.106985105 container health_status 5e88ce027731c107964786931bae9bef118899403d1be0c10049a593e10dea87 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 23 16:27:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.090 142158 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:27:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.090 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:27:51 np0005532762 ovn_metadata_agent[142153]: 2025-11-23 21:27:51.091 142158 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:27:51 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:51 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:51 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:52 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:52 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:52 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:52 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:52.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:53 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:53 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:53 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:54 np0005532762 systemd-logind[793]: New session 58 of user zuul.
Nov 23 16:27:54 np0005532762 systemd[1]: Started Session 58 of User zuul.
Nov 23 16:27:54 np0005532762 podman[255913]: 2025-11-23 21:27:54.264364233 +0000 UTC m=+0.089144038 container health_status 8f87c2ef0ebfc066de303dfc09f56e1137be5242124a780e8b721a21db810755 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 16:27:54 np0005532762 nova_compute[230183]: 2025-11-23 21:27:54.336 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:54 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:54 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:54 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:55 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:55 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:55 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:55.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:56 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:56 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:56 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:57 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:57 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:57 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:57.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:27:57 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 16:27:57 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1594982661' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 23 16:27:58 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:58 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:27:58 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:27:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:27:59 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:27:59 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:27:59 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:27:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.339 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.341 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.342 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.377 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:27:59 np0005532762 nova_compute[230183]: 2025-11-23 21:27:59.378 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:28:00 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:00 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:00 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:00 np0005532762 ovs-vsctl[256257]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 16:28:01 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:01 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:01 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:01 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 16:28:01 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 16:28:01 np0005532762 virtqemud[229705]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 16:28:02 np0005532762 systemd[1]: Starting dnf makecache...
Nov 23 16:28:02 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: cache status {prefix=cache status} (starting...)
Nov 23 16:28:02 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:02 np0005532762 dnf[256486]: Metadata cache refreshed recently.
Nov 23 16:28:02 np0005532762 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 16:28:02 np0005532762 systemd[1]: Finished dnf makecache.
Nov 23 16:28:02 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:02 np0005532762 lvm[256590]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 16:28:02 np0005532762 lvm[256590]: VG ceph_vg0 finished
Nov 23 16:28:02 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:02 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:02 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:02 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: client ls {prefix=client ls} (starting...)
Nov 23 16:28:02 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13909160' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 23 16:28:03 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:03 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:03 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:03.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2469849677' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 16:28:03 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 16:28:03 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352348249' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 23 16:28:04 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 16:28:04 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: ops {prefix=ops} (starting...)
Nov 23 16:28:04 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.378 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.380 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.380 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.381 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:28:04 np0005532762 nova_compute[230183]: 2025-11-23 21:28:04.417 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:28:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 16:28:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035369335' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 16:28:04 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:04 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:04 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:04 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 16:28:04 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1769929749' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: session ls {prefix=session ls} (starting...)
Nov 23 16:28:05 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm Can't run that command on an inactive MDS!
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2933207601' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mds[85352]: mds.cephfs.compute-1.gmfhnm asok_command: status {prefix=status} (starting...)
Nov 23 16:28:05 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:05 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:05 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:05.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4272356961' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1053523333' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2899678984' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857694526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 16:28:05 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/246114743' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/893255180' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:28:06 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:06 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:06 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1417216559' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 16:28:06 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1263003622' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2773208757' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 23 16:28:07 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:07 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:07 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:07.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1133566738' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 16:28:07 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1004930768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 23 16:28:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 16:28:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1316302909' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 23 16:28:08 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:08 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:08 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c452000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988246 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.244842529s of 58.250808716s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988378 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989890 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 1794048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.113847733s of 16.138629913s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805c633860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989167 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.863085747s of 13.866735458s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989299 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 1785856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990811 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 1777664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.973569870s of 14.986274719s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805c7ff400 session 0x55805cc7e5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805a7e6b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d8adc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990088 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1769472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.993670464s of 10.996788025s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990220 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991864 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 1761280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.335764885s of 11.355053902s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992785 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1753088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805c452b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805d25f0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992521 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.579681396s of 43.622188568s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992653 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 1744896 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995677 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.178874969s of 11.839152336s, submitted: 5
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cd7cb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805d565c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994363 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.315622330s of 29.319524765s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994495 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d64ac00 session 0x55805cd1c960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1736704 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999031 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.826273918s of 13.858831406s, submitted: 6
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 1728512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805d564f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805b7d72c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997717 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.386011124s of 20.388910294s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997849 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 1720320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805a67a960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805a7e5860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997258 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.066018105s of 12.072667122s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1712128 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 1703936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 1695744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.506420135s of 22.516693115s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 1687552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 1679360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 9173 writes, 35K keys, 9173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 9173 writes, 2093 syncs, 4.38 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 845 writes, 1350 keys, 845 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s
Interval WAL: 845 writes, 399 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558059077350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b434b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805d3f05a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000034s
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 1646592 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996535 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.324840546s of 47.329608917s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996667 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 1622016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 1613824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.818011284s of 13.826013565s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805ac0a000 session 0x55805cc80b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805cc7c1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 1605632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998047 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.236343384s of 33.268554688s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998179 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 1597440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001203 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 1589248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.904835701s of 10.988478661s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805d862f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805cc7cb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000480 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 1581056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.852489471s of 18.009616852s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 1572864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000612 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.888288498s of 13.898387909s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d564f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805b7d74a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 1564672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887771606s of 14.891558647s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,2])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999370 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 1425408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 65536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999298 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999430 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.794444084s of 14.003565788s, submitted: 364
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805c455c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 32768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000810 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.896261215s of 16.902111053s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000942 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 24576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002454 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 8192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.281729698s of 15.294480324s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c455a40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9000 session 0x55805c7f0000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a935800 session 0x55805c7f10e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001731 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.912832260s of 30.916051865s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001863 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001995 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.676693916s of 14.759789467s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a9f9c00 session 0x55805cc7eb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004296 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.021399498s of 11.037414551s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004164 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 0 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007320 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.964635849s of 10.006252289s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805b24c400 session 0x55805cc80780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f4d20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007518 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.184024811s of 46.203655243s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007650 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d651c00 session 0x55805d5652c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 ms_handle_reset con 0x55805d650800 session 0x55805c7ef0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.059599876s of 12.068504333s, submitted: 2
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006468 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x16ac66/0x224000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 1040384 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 142 handle_osd_map epochs [143,144], i have 142, src has [1,144]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.084339142s of 10.095458031s, submitted: 3
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016839 data_alloc: 218103808 data_used: 266240
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 2088960 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 145 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4aeb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 2031616 heap: 90136576 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc5dc000/0x0/0x4ffc00000, data 0x170fbd/0x22e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 18694144 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 146 ms_handle_reset con 0x55805b24c400 session 0x55805b69cf00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134726 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x11730f8/0x1233000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137156 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 18661376 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 18644992 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.137514114s of 14.367403030s, submitted: 61
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805a9f9000 session 0x55805d4afe00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88285184 unmapped: 18636800 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137024 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.764553070s of 15.769596100s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136316 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88293376 unmapped: 18628608 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 18620416 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139340 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138749 data_alloc: 218103808 data_used: 274432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805c4534a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d650800 session 0x55805d3f14a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11750ca/0x1236000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d651c00 session 0x55805d4721e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88309760 unmapped: 18612224 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.763429642s of 17.799776077s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d740000 session 0x55805b7d6b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805d64a800 session 0x55805d3f05a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138769 data_alloc: 218103808 data_used: 278528
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88326144 unmapped: 18595840 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb5d1000/0x0/0x4ffc00000, data 0x11771be/0x123a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 18587648 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 18579456 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805d92ad20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d651c00 session 0x55805d8bfa40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d740000 session 0x55805a6734a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805e2f1c00 session 0x55805cfb3a40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805b24c400 session 0x55805a7e7680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268233 data_alloc: 218103808 data_used: 278528
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89677824 unmapped: 21446656 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 ms_handle_reset con 0x55805d650800 session 0x55805cc7cd20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.449963570s of 10.389714241s, submitted: 77
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 89694208 unmapped: 21430272 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 20733952 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365949 data_alloc: 234881024 data_used: 14716928
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d5000/0x0/0x4ffc00000, data 0x207136b/0x2136000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367067 data_alloc: 234881024 data_used: 14716928
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa6d2000/0x0/0x4ffc00000, data 0x207333d/0x2139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367675 data_alloc: 234881024 data_used: 14733312
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 103882752 unmapped: 7241728 heap: 111124480 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.585947990s of 12.599705696s, submitted: 21
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114442240 unmapped: 876544 heap: 115318784 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8928000/0x0/0x4ffc00000, data 0x2c7e33d/0x2d44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,0,0,0,8])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 1097728 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88fe000/0x0/0x4ffc00000, data 0x2ca733d/0x2d6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475763 data_alloc: 234881024 data_used: 16691200
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 3555328 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f3000/0x0/0x4ffc00000, data 0x2cb333d/0x2d79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475931 data_alloc: 234881024 data_used: 16703488
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 3522560 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 3506176 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88f0000/0x0/0x4ffc00000, data 0x2cb633d/0x2d7c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 3497984 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.262884140s of 14.072373390s, submitted: 141
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 3481600 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476763 data_alloc: 234881024 data_used: 16764928
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 3473408 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805cc7c1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805cc80000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476259 data_alloc: 234881024 data_used: 16764928
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ef000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 3440640 heap: 117415936 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.868956566s of 10.927964211s, submitted: 6
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9400 session 0x55805c7f03c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd7c1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805d92a5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805d92bc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e7e00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c62000/0x0/0x4ffc00000, data 0x394339f/0x3a0a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d4af4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805cd1d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1570244 data_alloc: 234881024 data_used: 16769024
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 14835712 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d650800 session 0x55805cc80f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805a7e5860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 15720448 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 14475264 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665355 data_alloc: 234881024 data_used: 26218496
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121708544 unmapped: 7258112 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 7249920 heap: 128966656 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.360017776s of 17.521881104s, submitted: 38
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 6299648 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7c3c000/0x0/0x4ffc00000, data 0x39673d2/0x3a30000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6c19000/0x0/0x4ffc00000, data 0x498a3d2/0x4a53000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 7643136 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796399 data_alloc: 234881024 data_used: 26443776
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf8000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1796687 data_alloc: 234881024 data_used: 26435584
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f6bf7000/0x0/0x4ffc00000, data 0x49ab3d2/0x4a74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 8118272 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9800 session 0x55805a6730e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805a7e6b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.716887474s of 10.034674644s, submitted: 127
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805daf9c00 session 0x55805d8be3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851c000/0x0/0x4ffc00000, data 0x2cb733d/0x2d7d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492452 data_alloc: 234881024 data_used: 12238848
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f851b000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116072448 unmapped: 17383424 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d651c00 session 0x55805d25e000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d740000 session 0x55805cf8d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805d8be960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f88ee000/0x0/0x4ffc00000, data 0x2cb833d/0x2d7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805ac0a000 session 0x55805c454b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb2f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 26722304 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b69d2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180934 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.443714142s of 18.708480835s, submitted: 87
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181066 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26755072 heap: 133455872 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd7c3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805cd7cf00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1c00 session 0x55805b7f5c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b7f5a40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207078 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa062000/0x0/0x4ffc00000, data 0x15452db/0x160a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7f43c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b2a5a40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 33972224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1400 session 0x55805b2a4b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.920416832s of 12.955449104s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805b2a4960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211460 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34390016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238384 data_alloc: 218103808 data_used: 4263936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c2d000/0x0/0x4ffc00000, data 0x15692eb/0x162f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 34373632 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.594253540s of 12.610000610s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 31424512 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a24000/0x0/0x4ffc00000, data 0x17722eb/0x1838000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9a07000/0x0/0x4ffc00000, data 0x178f2eb/0x1855000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264450 data_alloc: 218103808 data_used: 4370432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99ff000/0x0/0x4ffc00000, data 0x17972eb/0x185d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 31088640 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263650 data_alloc: 218103808 data_used: 4370432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fd000/0x0/0x4ffc00000, data 0x17992eb/0x185f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 32440320 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.573128700s of 12.650348663s, submitted: 29
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263874 data_alloc: 218103808 data_used: 4370432
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99fc000/0x0/0x4ffc00000, data 0x179a2eb/0x1860000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cd74000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32415744 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.208628654s of 13.212368965s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b7f43c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186319 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8e1000 session 0x55805cfb2f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a7e6b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 35651584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.695281982s of 20.782047272s, submitted: 14
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805d25e5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9400 session 0x55805a673860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a672960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8be780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805cfb34a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805a7e74a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805b435c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 35627008 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x184034d/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251055 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 34578432 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92b0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d92a5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b69d2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 34537472 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0c00 session 0x55805b69cb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d567c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d567680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d566780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 34570240 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.721254349s of 10.977932930s, submitted: 86
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d5663c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9c00 session 0x55805d25ef00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d25fe00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d56d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d56de00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241073 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d92a5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af4000/0x0/0x4ffc00000, data 0x16a22eb/0x1768000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 33996800 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad9800 session 0x55805a7e6b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805a7e74a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb3680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242887 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 33988608 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107855872 unmapped: 32948224 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9af3000/0x0/0x4ffc00000, data 0x16a22fb/0x1769000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279063 data_alloc: 218103808 data_used: 5537792
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 107864064 unmapped: 32940032 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.606376648s of 16.673978806s, submitted: 14
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28213248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 27590656 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1368463 data_alloc: 218103808 data_used: 6819840
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 27697152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9099000/0x0/0x4ffc00000, data 0x20f42fb/0x21bb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362511 data_alloc: 218103808 data_used: 6823936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112549888 unmapped: 28254208 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.666546822s of 14.050541878s, submitted: 114
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f909e000/0x0/0x4ffc00000, data 0x20f72fb/0x21be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362735 data_alloc: 218103808 data_used: 6823936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 28246016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 28237824 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805b434780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d92ab40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb000 session 0x55805cd745a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805cd74f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8bfe00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 27705344 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396522 data_alloc: 218103808 data_used: 6823936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 27672576 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 27639808 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8be5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc1000/0x0/0x4ffc00000, data 0x24d335d/0x259b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8bf2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb400 session 0x55805d8bf860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050792694s of 10.149922371s, submitted: 31
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398336 data_alloc: 218103808 data_used: 6823936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8be000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 27623424 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 27525120 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115122176 unmapped: 25681920 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 25649152 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425240 data_alloc: 234881024 data_used: 10756096
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 25640960 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 25632768 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cc0000/0x0/0x4ffc00000, data 0x24d336d/0x259c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425048 data_alloc: 234881024 data_used: 10756096
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.226175308s of 12.244213104s, submitted: 6
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 25608192 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8b1f000/0x0/0x4ffc00000, data 0x267436d/0x273d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 22151168 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 22839296 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494334 data_alloc: 234881024 data_used: 11829248
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 22831104 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84ce000/0x0/0x4ffc00000, data 0x2cbd36d/0x2d86000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 16:28:08 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/18556835' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488982 data_alloc: 234881024 data_used: 11833344
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118038528 unmapped: 22765568 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7f4f00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c7f0960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22757376 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.527749062s of 10.052300453s, submitted: 105
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3004 syncs, 3.82 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2301 writes, 7858 keys, 2301 commit groups, 1.0 writes per commit group, ingest: 8.46 MB, 0.01 MB/s#012Interval WAL: 2301 writes, 911 syncs, 2.53 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f84d3000/0x0/0x4ffc00000, data 0x2cc036d/0x2d89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d92af00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371249 data_alloc: 218103808 data_used: 6823936
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 24436736 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8400 session 0x55805d8be3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805b7d7860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8cbd000/0x0/0x4ffc00000, data 0x20f92fb/0x21c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d4721e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 30138368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215158 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 30130176 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.834026337s of 34.967418671s, submitted: 43
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d6780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dad8800 session 0x55805d8ac3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d8ad680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233960 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 29532160 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d8ad860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111296512 unmapped: 29507584 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244525 data_alloc: 218103808 data_used: 1630208
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x12f12db/0x13b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 29499392 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.569715500s of 17.644144058s, submitted: 15
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 28090368 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335881 data_alloc: 218103808 data_used: 1634304
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114606080 unmapped: 26198016 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91fb000/0x0/0x4ffc00000, data 0x1f9c2db/0x2061000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115662848 unmapped: 25141248 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350955 data_alloc: 218103808 data_used: 2863104
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115671040 unmapped: 25133056 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 25124864 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350971 data_alloc: 218103808 data_used: 2863104
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25116672 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x1faa2db/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 25108480 heap: 140804096 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d56d680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d56de00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d25f860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351123 data_alloc: 218103808 data_used: 2867200
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.964529037s of 25.611534119s, submitted: 88
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d25ef00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d567c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacbc00 session 0x55805d566780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805d5663c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69d2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437857 data_alloc: 218103808 data_used: 2867200
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855c000/0x0/0x4ffc00000, data 0x2c3a2eb/0x2d00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 33177600 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d4afe00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 33161216 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 27746304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 25231360 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 25223168 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 25190400 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528574 data_alloc: 234881024 data_used: 14991360
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f855a000/0x0/0x4ffc00000, data 0x2c3b2eb/0x2d01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123002880 unmapped: 25149440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.176643372s of 17.280221939s, submitted: 17
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123772928 unmapped: 24379392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7989000/0x0/0x4ffc00000, data 0x380d2eb/0x38d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123813888 unmapped: 24338432 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 24330240 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123830272 unmapped: 24322048 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a935800 session 0x55805d56c780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a7f0000 session 0x55805b7f21e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 24313856 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616990 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.359371185s of 28.531023026s, submitted: 61
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 24305664 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123854848 unmapped: 24297472 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1617122 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123863040 unmapped: 24289280 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 24215552 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 24076288 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 22904832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125255680 unmapped: 22896640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 22888448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616282 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 125272064 unmapped: 22880256 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.529578209s of 15.271432877s, submitted: 389
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 23928832 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7983000/0x0/0x4ffc00000, data 0x38132eb/0x38d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616150 data_alloc: 234881024 data_used: 15196160
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 23920640 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 23912448 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d566000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.931664467s of 11.936676025s, submitted: 1
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d863e00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c453e00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360030 data_alloc: 218103808 data_used: 2863104
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 30932992 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d25e000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d3f14a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f91ec000/0x0/0x4ffc00000, data 0x1fab2db/0x2070000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805d4afc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238042 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 33013760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4ae000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d862d20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805c6323c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e2f0800 session 0x55805d4ae960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.715570450s of 28.907997131s, submitted: 67
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 32841728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b69cb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d4afa40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d3f10e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805a673860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d8623c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e88000/0x0/0x4ffc00000, data 0x130f2db/0x13d4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ad0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805d5641e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 32825344 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252852 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805cfb2000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805b7f4960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258466 data_alloc: 218103808 data_used: 815104
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x130f2eb/0x13d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacb800 session 0x55805cd752c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805c3fcb40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114671616 unmapped: 33480704 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d61e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114679808 unmapped: 33472512 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240867 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.481660843s of 18.543272018s, submitted: 18
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805cd7d2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d56de00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805dacac00 session 0x55805d56cf00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c452000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc400 session 0x55805c452780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f994c000/0x0/0x4ffc00000, data 0x184b2db/0x1910000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 33374208 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6fc800 session 0x55805d3f0d20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 33071104 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292449 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9928000/0x0/0x4ffc00000, data 0x186f2db/0x1934000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 33005568 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d5650e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da400 session 0x55805d8adc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 31719424 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805a673860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 34816000 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: mgrc ms_handle_reset ms_handle_reset con 0x55805cfc4c00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/844402651
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/844402651,v1:192.168.122.100:6801/844402651]
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: mgrc handle_mgr_configure stats_period=5
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f8400 session 0x55805d863860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d64ac00 session 0x55805b4350e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244819 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 34734080 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d25fa40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d25e000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ada40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0400 session 0x55805d92bc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.006832123s of 37.314971924s, submitted: 21
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d8ac3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d56c5a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d65dc20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805d3f1680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805d8bf2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277031 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db4000/0x0/0x4ffc00000, data 0x13e22eb/0x14a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 34373632 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b7d74a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805cd745a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 34365440 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc80b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc812c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 35921920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278845 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 36241408 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285837 data_alloc: 218103808 data_used: 1339392
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 36175872 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9db3000/0x0/0x4ffc00000, data 0x13e22fb/0x14a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.161880493s of 19.237621307s, submitted: 18
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 31727616 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 31547392 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361745 data_alloc: 218103808 data_used: 1740800
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9468000/0x0/0x4ffc00000, data 0x1d1e2fb/0x1de5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356265 data_alloc: 218103808 data_used: 1740800
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9453000/0x0/0x4ffc00000, data 0x1d422fb/0x1e09000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356569 data_alloc: 218103808 data_used: 1748992
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 31989760 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.781532288s of 14.450411797s, submitted: 111
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 31866880 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c1000 session 0x55805b69d4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d25e1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 31842304 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805b2a52c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9443000/0x0/0x4ffc00000, data 0x1d522fb/0x1e19000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 31834112 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 31825920 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253702 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 31817728 heap: 148152320 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d4730e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7fe00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0c00 session 0x55805cc7f2c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cc7ed20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.686355591s of 23.753026962s, submitted: 20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a673680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805a6734a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 38707200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc810e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc000 session 0x55805d3f1c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d3f01e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 38699008 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306052 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a7e63c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 38690816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f98d4000/0x0/0x4ffc00000, data 0x18c32db/0x1988000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355300 data_alloc: 218103808 data_used: 7626752
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118849536 unmapped: 36659200 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.395429611s of 17.444917679s, submitted: 6
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a7e6d20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805a7e65a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fcc00 session 0x55805cd752c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d4aef00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805c452780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 37216256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d80000/0x0/0x4ffc00000, data 0x24172db/0x24dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 35168256 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 35995648 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475407 data_alloc: 218103808 data_used: 7741440
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120201216 unmapped: 35307520 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d25fa40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x29ba2db/0x2a7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d4723c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483725 data_alloc: 218103808 data_used: 7733248
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 35291136 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd000 session 0x55805d65d680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cd74960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 35282944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123011072 unmapped: 32497664 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 25288704 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 25255936 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566132 data_alloc: 234881024 data_used: 19611648
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 25223168 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.900854111s of 21.241012573s, submitted: 73
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 21102592 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x29db30e/0x2aa2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,0,0,16,0,27])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 22020096 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629540 data_alloc: 234881024 data_used: 19615744
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 133537792 unmapped: 21970944 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 21340160 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f44000/0x0/0x4ffc00000, data 0x325130e/0x3318000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1635894 data_alloc: 234881024 data_used: 19615744
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134201344 unmapped: 21307392 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134225920 unmapped: 21282816 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255509377s of 12.832665443s, submitted: 75
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f41000/0x0/0x4ffc00000, data 0x325430e/0x331b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1634766 data_alloc: 234881024 data_used: 19615744
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 21258240 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d8be1e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805d8bf0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [1])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92b0e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f7f3b000/0x0/0x4ffc00000, data 0x325a30e/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 28704768 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416071 data_alloc: 218103808 data_used: 7733248
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d7da000 session 0x55805d472b40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d5652c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 34758656 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843377113s of 10.034677505s, submitted: 59
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d565860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9302000/0x0/0x4ffc00000, data 0x1e902db/0x1f55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 34750464 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa01c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273869 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120168448 unmapped: 35340288 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805a99b4a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805a99ba40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc800 session 0x55805d92a960
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805d92af00
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.899166107s of 25.910942078s, submitted: 4
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805d92ba40
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805d8ac3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805b7f4780
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd400 session 0x55805cfb30e0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805cfb23c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cfb3c20
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cfb3860
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 35102720 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fc400 session 0x55805c7f0000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805e6fd800 session 0x55805c7f12c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 35086336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321099 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356667 data_alloc: 218103808 data_used: 5595136
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 34996224 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.807069778s of 19.842288971s, submitted: 9
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f99f2000/0x0/0x4ffc00000, data 0x17a52db/0x186a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 31932416 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436141 data_alloc: 218103808 data_used: 6033408
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 30990336 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d6b7400 session 0x55805c7f05a0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d73d400 session 0x55805cd7c3c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.674288750s of 12.870928764s, submitted: 67
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f8d78000/0x0/0x4ffc00000, data 0x20002db/0x20c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805d8c0000 session 0x55805cc7e000
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 34136064 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 121356288 unmapped: 34152448 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 34668544 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 34684928 heap: 155508736 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 34627584 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf dump' '{prefix=perf dump}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf schema' '{prefix=perf schema}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 45809664 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 45801472 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 45793280 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120766464 unmapped: 45785088 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 45776896 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 13K writes, 3842 syncs, 3.51 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2001 writes, 6920 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 6.50 MB, 0.01 MB/s
Interval WAL: 2001 writes, 838 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 45768704 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 45760512 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 45752320 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 45744128 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120815616 unmapped: 45735936 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 45727744 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 45719552 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120840192 unmapped: 45711360 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 45703168 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 316.868835449s of 316.921936035s, submitted: 19
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 45694976 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284700 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 45670400 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 44425216 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4f9c0c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [1,0,0,1])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 43278336 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 43270144 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123289600 unmapped: 43261952 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123297792 unmapped: 43253760 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123297792 unmapped: 43253760 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805b24c400 session 0x55805c453680
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123305984 unmapped: 43245568 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 43237376 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123322368 unmapped: 43229184 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123330560 unmapped: 43220992 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123338752 unmapped: 43212800 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123346944 unmapped: 43204608 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 43196416 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123363328 unmapped: 43188224 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123371520 unmapped: 43180032 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123379712 unmapped: 43171840 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123387904 unmapped: 43163648 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 43155456 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 43147264 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 43139072 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 43130880 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 43122688 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 ms_handle_reset con 0x55805a9f9c00 session 0x55805c4532c0
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 43114496 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 43106304 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 43286528 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1284613 data_alloc: 218103808 data_used: 286720
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: prioritycache tune_memory target: 4294967296 mapped: 123215872 unmapped: 43335680 heap: 166551552 old mem: 2845415833 new mem: 2845415833
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fac2c000/0x0/0x4ffc00000, data 0x117b2db/0x1240000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Nov 23 16:28:08 np0005532762 ceph-osd[77613]: do_command 'log dump' '{prefix=log dump}'
Nov 23 16:28:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 16:28:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1842955223' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 23 16:28:09 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:09 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:09 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.418 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.420 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.450 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:28:09 np0005532762 nova_compute[230183]: 2025-11-23 21:28:09.451 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 16:28:09 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 16:28:09 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2176844904' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 23 16:28:10 np0005532762 nova_compute[230183]: 2025-11-23 21:28:10.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:28:10 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:10 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:10 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:10.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:10 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 16:28:10 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372307337' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1833908746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 23 16:28:11 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:11 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:11 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:11.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/706782067' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 16:28:11 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668548967' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/831638846' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3240368654' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2960342545' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:12 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:12 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:12 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:12.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1601245962' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1175955785' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 16:28:12 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2377348041' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3375474678' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 23 16:28:13 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:13 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:13 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:13.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1180863002' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/660462273' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 16:28:13 np0005532762 systemd[1]: Starting Hostname Service...
Nov 23 16:28:13 np0005532762 systemd[1]: Started Hostname Service.
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824463432' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4029995768' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 23 16:28:13 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/890621600' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 23 16:28:14 np0005532762 nova_compute[230183]: 2025-11-23 21:28:14.451 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:28:14 np0005532762 nova_compute[230183]: 2025-11-23 21:28:14.452 230187 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 16:28:14 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:14 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:14 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 23 16:28:14 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191927864' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 23 16:28:14 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 23 16:28:14 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/65066657' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 23 16:28:15 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:15 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:15 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 16:28:15 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3172043641' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 23 16:28:15 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 16:28:15 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1750229248' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4188482623' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.427 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.456 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.457 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:28:16 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:16 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:16 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2146727687' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2379998528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:28:16 np0005532762 nova_compute[230183]: 2025-11-23 21:28:16.901 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 23 16:28:16 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/241993972' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.042 230187 WARNING nova.virt.libvirt.driver [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4526MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.044 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.101 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.102 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.122 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 16:28:17 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:17 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 23 16:28:17 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/903892447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.569 230187 DEBUG oslo_concurrency.processutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.575 230187 DEBUG nova.compute.provider_tree [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed in ProviderTree for provider: bb217351-d4c8-44a4-9137-08393a1f72bc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.591 230187 DEBUG nova.scheduler.client.report [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Inventory has not changed for provider bb217351-d4c8-44a4-9137-08393a1f72bc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.593 230187 DEBUG nova.compute.resource_tracker [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 16:28:17 np0005532762 nova_compute[230183]: 2025-11-23 21:28:17.594 230187 DEBUG oslo_concurrency.lockutils [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 23 16:28:17 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/159930549' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 23 16:28:18 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:18 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:18 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.100 - anonymous [23/Nov/2025:21:28:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.595 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG oslo_service.periodic_task [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 16:28:18 np0005532762 nova_compute[230183]: 2025-11-23 21:28:18.617 230187 DEBUG nova.compute.manager [None req-7f86a359-183e-47cc-9ba7-595ac26726ac - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 16:28:19 np0005532762 ceph-mon[80135]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 23 16:28:19 np0005532762 ceph-mon[80135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1544273329' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 23 16:28:19 np0005532762 radosgw[84498]: ====== starting new request req=0x7f8382b025d0 =====
Nov 23 16:28:19 np0005532762 radosgw[84498]: ====== req done req=0x7f8382b025d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 23 16:28:19 np0005532762 radosgw[84498]: beast: 0x7f8382b025d0: 192.168.122.102 - anonymous [23/Nov/2025:21:28:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
